Jan 21 14:30:53 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 14:30:53 crc restorecon[4744]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 14:30:53 crc 
restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc 
restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 14:30:53 crc restorecon[4744]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 14:30:54 crc kubenswrapper[4834]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.151317 4834 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154734 4834 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154799 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154806 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154812 4834 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154818 4834 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154824 4834 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154831 4834 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154837 4834 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154843 4834 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154850 4834 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154855 4834 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154861 4834 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154866 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154882 4834 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154889 4834 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154893 4834 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154898 4834 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154903 4834 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154908 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154912 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:30:54 crc kubenswrapper[4834]: 
W0121 14:30:54.154917 4834 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154922 4834 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154927 4834 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154949 4834 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154955 4834 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154959 4834 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154964 4834 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154969 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154974 4834 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154979 4834 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154985 4834 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154990 4834 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.154994 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155000 4834 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155005 4834 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155010 4834 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155016 4834 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155021 4834 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155027 4834 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155032 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155039 4834 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
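A quick way to audit the "Flag --X has been deprecated" records above is to extract them from a saved copy of this journal. A minimal sketch, assuming the dump was captured to a file named kubelet.log (hypothetical; any `journalctl -u kubelet` export works), that handles both one-record-per-line journals and run-on dumps like this one:

```python
# Extract each deprecated kubelet flag and the migration advice that follows it.
import re

DEPRECATED = re.compile(
    r"Flag (--[\w-]+) has been deprecated, (.*?)(?=Jan \d{2} \d{2}:\d{2}:\d{2}|$)",
    re.S,
)

with open("kubelet.log") as fh:  # hypothetical file name
    for flag, advice in DEPRECATED.findall(fh.read()):
        print(f"{flag}: {advice.strip()}")
```

Most of the flags called out here (--container-runtime-endpoint, --volume-plugin-dir, --register-with-taints, --system-reserved) carry the same advice: move the setting into the file given by --config, which the FLAG dump below shows is /etc/kubernetes/kubelet.conf.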
Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155050 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155056 4834 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155062 4834 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155067 4834 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155071 4834 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155080 4834 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155087 4834 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155092 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155100 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155105 4834 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155111 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155116 4834 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155121 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155126 4834 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155131 4834 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155136 4834 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155141 4834 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155146 4834 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155152 4834 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155157 4834 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155163 4834 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155169 4834 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155175 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155181 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155187 4834 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155192 4834 
feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155198 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155203 4834 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155208 4834 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.155217 4834 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155316 4834 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155331 4834 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155352 4834 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155360 4834 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155368 4834 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155374 4834 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155383 4834 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155392 4834 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155399 4834 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155406 4834 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155414 4834 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155421 4834 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155428 4834 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155434 4834 flags.go:64] FLAG: --cgroup-root="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155442 4834 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155448 4834 flags.go:64] FLAG: --client-ca-file="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155453 4834 flags.go:64] FLAG: --cloud-config="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155459 4834 flags.go:64] FLAG: --cloud-provider="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155464 4834 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155472 4834 flags.go:64] FLAG: --cluster-domain="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155477 4834 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155483 4834 flags.go:64] FLAG: --config-dir="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155489 4834 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155495 4834 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 14:30:54 crc 
kubenswrapper[4834]: I0121 14:30:54.155503 4834 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155509 4834 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155514 4834 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155520 4834 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155526 4834 flags.go:64] FLAG: --contention-profiling="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155532 4834 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155539 4834 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155545 4834 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155550 4834 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155559 4834 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155569 4834 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155575 4834 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155581 4834 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155587 4834 flags.go:64] FLAG: --enable-server="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155592 4834 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155602 4834 flags.go:64] FLAG: --event-burst="100" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155609 4834 flags.go:64] FLAG: --event-qps="50" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155615 4834 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155621 4834 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155627 4834 flags.go:64] FLAG: --eviction-hard="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155634 4834 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155641 4834 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155646 4834 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155653 4834 flags.go:64] FLAG: --eviction-soft="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155659 4834 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155665 4834 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155673 4834 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155679 4834 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155686 4834 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155693 4834 flags.go:64] FLAG: 
--fail-swap-on="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155699 4834 flags.go:64] FLAG: --feature-gates="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155707 4834 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155714 4834 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155720 4834 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155726 4834 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155732 4834 flags.go:64] FLAG: --healthz-port="10248" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155738 4834 flags.go:64] FLAG: --help="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155745 4834 flags.go:64] FLAG: --hostname-override="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155751 4834 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155757 4834 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155762 4834 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155768 4834 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155779 4834 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155785 4834 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155791 4834 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155797 4834 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155803 4834 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155810 4834 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155817 4834 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155824 4834 flags.go:64] FLAG: --kube-reserved="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155830 4834 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155836 4834 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155843 4834 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155849 4834 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155854 4834 flags.go:64] FLAG: --lock-file="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155860 4834 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155866 4834 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155872 4834 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155881 4834 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155887 4834 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 14:30:54 crc 
kubenswrapper[4834]: I0121 14:30:54.155893 4834 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155899 4834 flags.go:64] FLAG: --logging-format="text" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155906 4834 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155912 4834 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155918 4834 flags.go:64] FLAG: --manifest-url="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155924 4834 flags.go:64] FLAG: --manifest-url-header="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155952 4834 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155960 4834 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155968 4834 flags.go:64] FLAG: --max-pods="110" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155974 4834 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155980 4834 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155987 4834 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.155992 4834 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156000 4834 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156010 4834 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156017 4834 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156031 4834 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156037 4834 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156044 4834 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156050 4834 flags.go:64] FLAG: --pod-cidr="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156056 4834 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156065 4834 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156071 4834 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156077 4834 flags.go:64] FLAG: --pods-per-core="0" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156084 4834 flags.go:64] FLAG: --port="10250" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156090 4834 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156096 4834 flags.go:64] FLAG: --provider-id="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156101 4834 flags.go:64] FLAG: --qos-reserved="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156107 4834 flags.go:64] FLAG: --read-only-port="10255" Jan 21 14:30:54 crc 
kubenswrapper[4834]: I0121 14:30:54.156113 4834 flags.go:64] FLAG: --register-node="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156119 4834 flags.go:64] FLAG: --register-schedulable="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156124 4834 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156135 4834 flags.go:64] FLAG: --registry-burst="10" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156141 4834 flags.go:64] FLAG: --registry-qps="5" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156147 4834 flags.go:64] FLAG: --reserved-cpus="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156153 4834 flags.go:64] FLAG: --reserved-memory="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156161 4834 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156167 4834 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156175 4834 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156182 4834 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156189 4834 flags.go:64] FLAG: --runonce="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156194 4834 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156200 4834 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156207 4834 flags.go:64] FLAG: --seccomp-default="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156213 4834 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156219 4834 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156232 4834 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156238 4834 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156245 4834 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156253 4834 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156260 4834 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156265 4834 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156271 4834 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156277 4834 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156283 4834 flags.go:64] FLAG: --system-cgroups="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156289 4834 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156299 4834 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156305 4834 flags.go:64] FLAG: --tls-cert-file="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156311 4834 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 14:30:54 
crc kubenswrapper[4834]: I0121 14:30:54.156319 4834 flags.go:64] FLAG: --tls-min-version="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156326 4834 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156332 4834 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156339 4834 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156346 4834 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156352 4834 flags.go:64] FLAG: --v="2" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156361 4834 flags.go:64] FLAG: --version="false" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156368 4834 flags.go:64] FLAG: --vmodule="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156375 4834 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156382 4834 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156522 4834 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156532 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156539 4834 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156545 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156551 4834 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156558 4834 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156564 4834 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156569 4834 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156574 4834 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156585 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156590 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156596 4834 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156601 4834 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156607 4834 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156612 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156617 4834 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156622 4834 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156628 4834 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156634 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156639 4834 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156646 4834 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156653 4834 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156660 4834 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156666 4834 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156672 4834 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156677 4834 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156682 4834 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156687 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156692 4834 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156697 4834 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156702 4834 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156706 4834 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156711 4834 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156716 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156721 4834 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156725 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156730 4834 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156735 4834 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156741 4834 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156747 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156752 4834 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156765 4834 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156771 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS 
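The feature_gate warnings repeat because the gate set is applied several times during startup, and only three shapes occur, distinguishable by the source line in this build (feature_gate.go:330 for unrecognized gates, :351 for deprecated ones being set, :353 for GA ones being set). A minimal sketch that tallies unique gate names per kind from the same hypothetical kubelet.log capture:

```python
# Tally unique feature-gate names by warning kind. The 330/351/353 line
# numbers are taken from this specific kubelet build's log output.
import re
from collections import defaultdict

KINDS = {"330": "unrecognized", "351": "deprecated", "353": "GA"}
PATTERN = re.compile(
    r"feature_gate\.go:(330|351|353)\] "
    r"(?:unrecognized feature gate: (\w+)"
    r"|Setting (?:GA|deprecated) feature gate (\w+)=\w+)"
)

by_kind = defaultdict(set)
with open("kubelet.log") as fh:  # hypothetical file name
    for line_no, unrec, named in PATTERN.findall(fh.read()):
        by_kind[KINDS[line_no]].add(unrec or named)

for kind, gates in sorted(by_kind.items()):
    print(f"{kind}: {len(gates)} unique gates")
```

The unrecognized names (ManagedBootImages, GatewayAPI, NewOLM, and so on) appear to be OpenShift-side gates that the embedded Kubernetes gate registry does not know about, so on this platform the warnings are noisy but expected.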
Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156776 4834 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156781 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156785 4834 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156791 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156796 4834 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156801 4834 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156806 4834 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156812 4834 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156817 4834 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156822 4834 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156827 4834 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156832 4834 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156837 4834 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156842 4834 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156848 4834 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156855 4834 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156862 4834 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156868 4834 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156873 4834 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156880 4834 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
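The flags.go:64 lines further above dump every effective command-line value as one FLAG: --name="value" record each, which makes the running configuration easy to recover mechanically. A minimal sketch over the same hypothetical capture:

```python
# Rebuild the kubelet's effective flag set from the FLAG: records.
import re

FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"(?=\s|$)')

with open("kubelet.log") as fh:  # hypothetical file name
    flags = dict(FLAG.findall(fh.read()))

print(flags["--config"])         # /etc/kubernetes/kubelet.conf
print(flags["--cgroup-driver"])  # cgroupfs; the CRI runtime overrides this
                                 # to systemd later in the log
```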
Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156886 4834 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156892 4834 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156897 4834 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156902 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156906 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156911 4834 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156915 4834 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.156920 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.156957 4834 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.173310 4834 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.173371 4834 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173556 4834 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173572 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173583 4834 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173594 4834 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173603 4834 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173613 4834 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173621 4834 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173630 4834 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173639 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173649 4834 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173660 4834 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173669 4834 
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173679 4834 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173688 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173697 4834 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173706 4834 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173714 4834 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173726 4834 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173737 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173747 4834 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173756 4834 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173765 4834 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173773 4834 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173785 4834 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173799 4834 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173811 4834 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173819 4834 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173828 4834 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173837 4834 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173845 4834 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173853 4834 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173862 4834 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173870 4834 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173900 4834 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
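Once the warnings settle, the I-level feature_gate.go:386 record above prints the resolved gate map in Go's map[...] syntax, and it is repeated after each pass. Turning one of those records into a structured form takes only a little parsing; a minimal sketch:

```python
# Parse a Go-style "feature gates: {map[Name:bool ...]}" dump into a dict.
import re

def parse_gate_map(record: str) -> dict[str, bool]:
    inner = re.search(r"\{map\[(.*?)\]\}", record).group(1)
    return {name: val == "true"
            for name, val in (pair.split(":") for pair in inner.split())}

record = ("feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true "
          "NodeSwap:false ValidatingAdmissionPolicy:true]}")
print(parse_gate_map(record))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False,
#  'ValidatingAdmissionPolicy': True}
```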
Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173912 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173961 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173977 4834 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.173991 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174002 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174011 4834 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174020 4834 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174029 4834 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174038 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174046 4834 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174055 4834 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174063 4834 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174072 4834 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174080 4834 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174089 4834 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174098 4834 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174106 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174114 4834 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174122 4834 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174131 4834 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174140 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174148 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174156 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174165 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174173 4834 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174182 4834 feature_gate.go:330] unrecognized feature gate: 
AWSEFSDriverVolumeMetrics Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174191 4834 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174200 4834 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174210 4834 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174218 4834 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174226 4834 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174235 4834 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174244 4834 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174254 4834 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174262 4834 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174271 4834 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174280 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.174295 4834 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174609 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174627 4834 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174638 4834 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174649 4834 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174658 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174667 4834 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174675 4834 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174684 4834 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174693 4834 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174701 4834 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174709 4834 feature_gate.go:330] unrecognized 
feature gate: ClusterAPIInstall Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174719 4834 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174728 4834 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174737 4834 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174745 4834 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174753 4834 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174763 4834 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174771 4834 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174781 4834 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174789 4834 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174798 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174806 4834 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174815 4834 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174824 4834 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174832 4834 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174841 4834 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174849 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174858 4834 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174866 4834 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174874 4834 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174884 4834 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174894 4834 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174975 4834 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174988 4834 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.174999 4834 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175010 4834 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175020 4834 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175034 4834 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175044 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175055 4834 feature_gate.go:330] unrecognized feature gate: Example Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175065 4834 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175075 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175085 4834 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175095 4834 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175105 4834 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175114 4834 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175123 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175132 4834 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175142 4834 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175151 4834 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175160 4834 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175169 4834 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175177 4834 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175186 4834 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175194 4834 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175204 4834 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175212 4834 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175221 4834 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175232 4834 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175242 4834 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175251 4834 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175259 4834 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175268 4834 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175276 4834 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175285 4834 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175293 4834 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175304 4834 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175315 4834 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175326 4834 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175336 4834 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.175347 4834 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.175360 4834 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.175678 4834 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.180347 4834 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.180517 4834 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
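The certificate_manager records that follow show the rotation schedule: expiration 2026-02-24 05:52:08 UTC, rotation deadline 2025-11-08 11:20:34 UTC. Upstream kubelet picks that deadline at a random point roughly 70-90% of the way through the certificate's validity window. The cert's notBefore is not in the log, so the sketch below assumes a one-year certificate ending at the logged expiry; under that assumption the logged deadline lands at about 70.5% of the window, consistent with the jitter range:

```python
# Check where the logged rotation deadline falls in the (assumed) validity
# window. notBefore is NOT in the log -- a one-year cert is assumed here.
from datetime import datetime, timezone

UTC = timezone.utc
not_before = datetime(2025, 2, 24, 5, 52, 8, tzinfo=UTC)    # assumption
not_after  = datetime(2026, 2, 24, 5, 52, 8, tzinfo=UTC)    # from the log
deadline   = datetime(2025, 11, 8, 11, 20, 34, tzinfo=UTC)  # from the log

fraction = (deadline - not_before) / (not_after - not_before)
print(f"rotation deadline at {fraction:.1%} of validity")   # ~70.5%
```

The "connection refused" on the CSR POST that follows (dial tcp ...:6443) typically just means the API server is not up yet this early in boot; the manager retries the rotation.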
Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.181448 4834 server.go:997] "Starting client certificate rotation" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.181502 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.181915 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 11:20:34.977610322 +0000 UTC Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.182069 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.190156 4834 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.192943 4834 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.193660 4834 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.206765 4834 log.go:25] "Validated CRI v1 runtime API" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.222325 4834 log.go:25] "Validated CRI v1 image API" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.224277 4834 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.227044 4834 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-14-26-05-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.227099 4834 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.253612 4834 manager.go:217] Machine: {Timestamp:2026-01-21 14:30:54.252319043 +0000 UTC m=+0.226668108 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b40d490c-65ad-4102-a086-7d2250750f42 BootID:d3596a4c-1b27-4372-98f4-5a8df0ab061a Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:27:3c:c5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:27:3c:c5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d2:47:d4 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:22:fb:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:56:5e:17 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:50:91:c9 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:12:2c:78 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:d1:19:1c:bc:45 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:74:e3:4b:e4:20 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.253873 4834 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254084 4834 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254444 4834 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254608 4834 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254649 4834 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254853 4834 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 
14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.254864 4834 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.255057 4834 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.255097 4834 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.255493 4834 state_mem.go:36] "Initialized new in-memory state store" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.255607 4834 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.256486 4834 kubelet.go:418] "Attempting to sync node with API server" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.256512 4834 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.256542 4834 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.256556 4834 kubelet.go:324] "Adding apiserver pod source" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.256573 4834 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.258682 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.258792 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.258683 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.258912 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.258994 4834 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.259449 4834 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.260376 4834 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261019 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261045 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261053 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261063 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261077 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261085 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261094 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261106 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261119 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261129 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261153 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261164 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.261425 4834 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.262034 4834 server.go:1280] "Started kubelet" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.262388 4834 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.262420 4834 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.262496 4834 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.263131 4834 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.263900 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.263974 4834 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.264005 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:14:55.256217802 +0000 UTC Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.264183 4834 volume_manager.go:287] "The 
desired_state_of_world populator starts" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.264223 4834 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.264209 4834 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.264315 4834 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 14:30:54 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.265483 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.265645 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.265595 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="200ms" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.266219 4834 factory.go:55] Registering systemd factory Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.266242 4834 factory.go:221] Registration of the systemd container factory successfully Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.265415 4834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.45:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc573acdeae91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:30:54.261948049 +0000 UTC m=+0.236297134,LastTimestamp:2026-01-21 14:30:54.261948049 +0000 UTC m=+0.236297134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.267076 4834 factory.go:153] Registering CRI-O factory Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.267388 4834 factory.go:221] Registration of the crio container factory successfully Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.267461 4834 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.267486 4834 factory.go:103] Registering Raw factory Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.267502 4834 manager.go:1196] Started watching for new ooms in manager Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 
14:30:54.268193 4834 manager.go:319] Starting recovery of all containers Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.274275 4834 server.go:460] "Adding debug handlers to kubelet server" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278533 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278643 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278658 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278670 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278681 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278691 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278703 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278712 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278725 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278736 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278747 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278757 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278766 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278779 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278789 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278798 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278846 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278880 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278890 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278899 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278908 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278919 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278943 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.278989 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279014 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279025 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279038 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279050 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279060 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279069 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279079 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279089 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279099 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279108 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279119 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279128 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279159 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279169 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279181 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279191 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279202 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279212 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279227 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279238 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279248 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279257 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279267 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279277 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279286 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279297 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279307 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279318 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279333 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279343 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279355 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279364 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279373 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279381 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279391 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279400 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279409 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279419 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279429 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279439 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279448 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279459 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279469 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279478 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279488 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279498 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279507 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279517 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279526 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279539 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279550 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279559 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279569 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279579 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279589 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279599 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279610 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279619 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279631 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279641 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279652 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279663 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279674 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279684 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279695 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279706 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279719 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279761 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279770 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279782 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279792 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279804 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279815 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279826 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279836 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279847 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279858 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279869 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279879 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279890 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279908 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279922 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279980 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.279994 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280006 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280019 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280032 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280044 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280057 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280069 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280081 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280092 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280106 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280116 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280129 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280141 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280152 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280163 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280176 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280186 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280197 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280207 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280218 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280229 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280240 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280252 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280263 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280274 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280284 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280294 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280306 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280317 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280327 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280338 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280349 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280359 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280371 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280380 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280391 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280401 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280412 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280424 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280435 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280445 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280456 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280466 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280478 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280488 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280500 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280513 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280525 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280536 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280549 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280560 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280571 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280581 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280591 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280600 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280610 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280620 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.280630 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281255 4834 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281283 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281313 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281327 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281342 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281356 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281369 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281381 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281394 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281411 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281423 4834 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281436 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281453 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281466 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281481 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281495 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281510 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281525 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281541 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281554 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281572 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281587 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281609 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281623 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281666 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281703 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281718 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281732 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281745 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281758 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281771 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281784 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281797 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281811 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281826 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281842 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281857 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281869 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281882 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281893 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281906 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281919 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281952 4834 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281964 4834 reconstruct.go:97] "Volume reconstruction finished" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.281974 4834 
reconciler.go:26] "Reconciler: start to sync state" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.299330 4834 manager.go:324] Recovery completed Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.312086 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.317827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.317904 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.318079 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.319611 4834 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.319644 4834 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.319677 4834 state_mem.go:36] "Initialized new in-memory state store" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.320224 4834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.323208 4834 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.323272 4834 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.323313 4834 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.324035 4834 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.324143 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.324229 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.364990 4834 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.425291 4834 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.465754 4834 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.467622 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" 
interval="400ms" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.484501 4834 policy_none.go:49] "None policy: Start" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.486276 4834 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.486322 4834 state_mem.go:35] "Initializing new in-memory state store" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.544004 4834 manager.go:334] "Starting Device Plugin manager" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.544214 4834 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.544262 4834 server.go:79] "Starting device plugin registration server" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.545155 4834 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.545193 4834 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.545466 4834 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.545709 4834 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.545736 4834 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.554530 4834 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.626186 4834 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.626816 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.628816 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.628882 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.628961 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.629193 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.629584 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.629805 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.630879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.630924 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.630958 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.631132 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.631422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.631636 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.631807 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.631661 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632156 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632641 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632801 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632903 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.632991 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.633719 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.633740 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.633751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634158 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634194 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634364 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634521 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634567 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634796 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634823 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.634841 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.635343 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.635420 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.635446 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.636100 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.636169 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.636258 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.636293 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.636308 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.637387 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.637457 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.637483 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.646168 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.647620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.647694 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.647714 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.647780 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.648523 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688232 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688307 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688454 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688512 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688574 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688658 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688705 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688741 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688798 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688879 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.688973 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.689031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.689083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791183 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791262 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791342 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791370 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791418 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791442 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791475 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791491 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791571 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791641 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791548 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791612 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791722 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791752 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791762 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791772 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791820 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791813 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791881 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791882 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.791988 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792020 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792069 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792100 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792212 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792251 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.792350 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.849662 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.851609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.851682 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.851700 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.851744 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.852599 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Jan 21 14:30:54 crc kubenswrapper[4834]: E0121 14:30:54.869139 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="800ms" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.961104 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.967472 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: I0121 14:30:54.981027 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.991134 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-cdf290e93eb40e86ffa6da24494aad8c27a19656309b6d7744c0061253cce404 WatchSource:0}: Error finding container cdf290e93eb40e86ffa6da24494aad8c27a19656309b6d7744c0061253cce404: Status 404 returned error can't find the container with id cdf290e93eb40e86ffa6da24494aad8c27a19656309b6d7744c0061253cce404 Jan 21 14:30:54 crc kubenswrapper[4834]: W0121 14:30:54.993632 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4261ffbb764969435ed35654a06caf1804b635e8935a0c277a3bd83ab6a87388 WatchSource:0}: Error finding container 4261ffbb764969435ed35654a06caf1804b635e8935a0c277a3bd83ab6a87388: Status 404 returned error can't find the container with id 4261ffbb764969435ed35654a06caf1804b635e8935a0c277a3bd83ab6a87388 Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.001068 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f81fffd95f2bae7b45309fb8b9b4661b7daec79c3c5ff2a2667a8ceb4d166760 WatchSource:0}: Error finding container f81fffd95f2bae7b45309fb8b9b4661b7daec79c3c5ff2a2667a8ceb4d166760: Status 404 returned error can't find the container with id f81fffd95f2bae7b45309fb8b9b4661b7daec79c3c5ff2a2667a8ceb4d166760 Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.004079 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.005207 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.108547 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-eba7dd9e35f94f98a9e900a54839c411a0e148a2aae5865c5b9ee6e0190295ac WatchSource:0}: Error finding container eba7dd9e35f94f98a9e900a54839c411a0e148a2aae5865c5b9ee6e0190295ac: Status 404 returned error can't find the container with id eba7dd9e35f94f98a9e900a54839c411a0e148a2aae5865c5b9ee6e0190295ac Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.253523 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.257355 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.257430 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.257450 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.257490 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.258281 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc" Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.263990 4834 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.264966 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:37:15.227021307 +0000 UTC Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.332796 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4261ffbb764969435ed35654a06caf1804b635e8935a0c277a3bd83ab6a87388"} Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.334213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eba7dd9e35f94f98a9e900a54839c411a0e148a2aae5865c5b9ee6e0190295ac"} Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.335124 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"414bf56193f3b5e9a707335c6f6bf5bcde744304c833f69385bfb28252c4e6a9"} Jan 21 14:30:55 crc kubenswrapper[4834]: I0121 14:30:55.336458 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f81fffd95f2bae7b45309fb8b9b4661b7daec79c3c5ff2a2667a8ceb4d166760"} Jan 21 14:30:55 crc 
kubenswrapper[4834]: I0121 14:30:55.337879 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cdf290e93eb40e86ffa6da24494aad8c27a19656309b6d7744c0061253cce404"} Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.592247 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.592350 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.592343 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.592453 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.670641 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="1.6s" Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.783284 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.783411 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" Jan 21 14:30:55 crc kubenswrapper[4834]: W0121 14:30:55.860397 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused Jan 21 14:30:55 crc kubenswrapper[4834]: E0121 14:30:55.860498 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError" 
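The lease errors retry with a growing interval: 800ms above, 1.6s here, and 3.2s shortly below, i.e. the wait doubles after each failure. A small sketch of that doubling backoff under an assumed cap; ensureLease is a hypothetical stand-in for the Lease create/renew call against kube-node-lease:

package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real API call; here it always fails,
// just as it does while the apiserver is still unreachable above.
func ensureLease() error {
	return errors.New("dial tcp 38.102.83.45:6443: connect: connection refused")
}

func main() {
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap for this sketch
	for attempt := 0; attempt < 3; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry err=%v interval=%q\n", err, interval)
			time.Sleep(interval)
			interval *= 2 // 800ms -> 1.6s -> 3.2s, matching the log
			if interval > maxInterval {
				interval = maxInterval
			}
		}
	}
}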
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.058742 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.060356 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.060417 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.060428 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.060463 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 14:30:56 crc kubenswrapper[4834]: E0121 14:30:56.061079 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.45:6443: connect: connection refused" node="crc"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.263973 4834 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.265167 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:29:18.898126301 +0000 UTC
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.281064 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 14:30:56 crc kubenswrapper[4834]: E0121 14:30:56.282645 4834 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.45:6443: connect: connection refused" logger="UnhandledError"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.342676 4834 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4" exitCode=0
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.342769 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.342838 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.343918 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.344007 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.344026 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.344360 4834 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7" exitCode=0
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.344439 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.344566 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.346340 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.346373 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.346384 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.347698 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.347755 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.347769 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.349503 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189" exitCode=0
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.349594 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.349634 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.351421 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.351448 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.351460 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.353280 4834 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59" exitCode=0
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.353353 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59"}
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.353365 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.354162 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358175 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358212 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358224 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358412 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358463 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:56 crc kubenswrapper[4834]: I0121 14:30:56.358477 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.264312 4834 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.45:6443: connect: connection refused
Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.265235 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:52:51.580989318 +0000 UTC
Jan 21 14:30:57 crc kubenswrapper[4834]: E0121 14:30:57.272029 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="3.2s"
Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.358899 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2"}
Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.359004 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454"}
containerID="0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce" exitCode=0 Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.362742 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce"} Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.362796 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.366345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.366401 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.366421 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.367118 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d79d6d1f0b1be80b358d624746e6afaf9b8d13e4b7e75268f72ab35ae062967a"} Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.367313 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.368660 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.368702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.368722 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.374180 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429"} Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.374236 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf"} Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.378735 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19"} Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.378876 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.379968 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.380002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.380014 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.661783 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.663448 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.663490 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.663504 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:57 crc kubenswrapper[4834]: I0121 14:30:57.663536 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.265852 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:51:57.452695476 +0000 UTC Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.387906 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a"} Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.388085 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7"} Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.388125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7"} Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.388126 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.390252 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.390299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.390319 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.391654 4834 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9" exitCode=0 Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.391758 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9"} Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.391977 4834 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.393682 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.393720 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.393738 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.397667 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750"} Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.397749 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.397861 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.397897 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400016 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400676 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400793 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400838 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.400853 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.402496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.402653 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:30:58 crc kubenswrapper[4834]: I0121 14:30:58.402827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.065152 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.266838 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:16:01.582228086 +0000 UTC Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.403667 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:30:59 crc 
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.403740 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404175 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94"}
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404276 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404350 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404907 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404967 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.404980 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.405626 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.405683 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:30:59 crc kubenswrapper[4834]: I0121 14:30:59.405702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.267993 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:51:19.109175683 +0000 UTC
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.409836 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.409892 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.410475 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a"}
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.410524 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636"}
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.410791 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454"}
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.410806 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803"}
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.417950 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.418013 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.418028 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.439356 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:31:00 crc kubenswrapper[4834]: I0121 14:31:00.556347 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.203868 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.204108 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.205535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.205584 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.205595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.268965 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:43:52.62688606 +0000 UTC
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.412443 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.413071 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.413137 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.413877 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.413910 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.413920 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.414489 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.414525 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:01 crc kubenswrapper[4834]: I0121 14:31:01.414538 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.269129 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:00:34.48608177 +0000 UTC
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.577154 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.577407 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.578659 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.578707 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.578725 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.817175 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.817343 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.818645 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.818692 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:02 crc kubenswrapper[4834]: I0121 14:31:02.818702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.269525 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:22:06.906069873 +0000 UTC
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.433534 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.433798 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.434952 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.434999 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:03 crc kubenswrapper[4834]: I0121 14:31:03.435018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.021799 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.022018 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
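"Rotating certificates" above is the client-certificate path: a fresh key is generated, wrapped in a PKCS#10 CSR, and posted to the certificatesigningrequests endpoint named in the earlier connection-refused error, under the signer kubernetes.io/kube-apiserver-client-kubelet. A stdlib-only sketch of the CSR-generation step; the system:node subject convention is assumed here, and the POST itself is omitted:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

func main() {
	// Fresh private key for the replacement client certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// Node client certs conventionally identify as system:node:<name>
	// in the system:nodes group (assumed convention, not from this log).
	tmpl := &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:crc",
			Organization: []string{"system:nodes"},
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
	if err != nil {
		panic(err)
	}
	// This PEM would become the request field of a CertificateSigningRequest;
	// the POST in the log failed only because the apiserver was unreachable.
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})))
}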
event="NodeHasSufficientMemory" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.023198 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.023207 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.133194 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.138615 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.269919 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:22:17.378795907 +0000 UTC Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.421903 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.423030 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.423067 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.423079 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.546991 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.547186 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.548191 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.548297 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:04 crc kubenswrapper[4834]: I0121 14:31:04.548385 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:04 crc kubenswrapper[4834]: E0121 14:31:04.555515 4834 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.270346 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:34:06.734540056 +0000 UTC Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.381612 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.382250 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.384339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:05 crc 
kubenswrapper[4834]: I0121 14:31:05.384411 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.384426 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.424534 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.426839 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.426909 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.426972 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.430677 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.818074 4834 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:31:05 crc kubenswrapper[4834]: I0121 14:31:05.818203 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:31:06 crc kubenswrapper[4834]: I0121 14:31:06.270828 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:13:38.008633687 +0000 UTC Jan 21 14:31:06 crc kubenswrapper[4834]: I0121 14:31:06.427382 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:06 crc kubenswrapper[4834]: I0121 14:31:06.428830 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:06 crc kubenswrapper[4834]: I0121 14:31:06.428911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:06 crc kubenswrapper[4834]: I0121 14:31:06.428999 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:07 crc kubenswrapper[4834]: I0121 14:31:07.273080 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:25:18.980671397 +0000 UTC Jan 21 14:31:07 crc kubenswrapper[4834]: W0121 14:31:07.465102 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:31:07 crc kubenswrapper[4834]: I0121 14:31:07.465276 
4834 trace.go:236] Trace[1627534104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:30:57.462) (total time: 10002ms): Jan 21 14:31:07 crc kubenswrapper[4834]: Trace[1627534104]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:31:07.465) Jan 21 14:31:07 crc kubenswrapper[4834]: Trace[1627534104]: [10.002546841s] [10.002546841s] END Jan 21 14:31:07 crc kubenswrapper[4834]: E0121 14:31:07.465322 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 14:31:07 crc kubenswrapper[4834]: E0121 14:31:07.664505 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 14:31:08 crc kubenswrapper[4834]: W0121 14:31:08.012055 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.012182 4834 trace.go:236] Trace[2030161979]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:30:58.004) (total time: 10007ms): Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[2030161979]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10007ms (14:31:08.012) Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[2030161979]: [10.007931902s] [10.007931902s] END Jan 21 14:31:08 crc kubenswrapper[4834]: E0121 14:31:08.012213 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.264538 4834 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.274897 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:40:10.387172769 +0000 UTC Jan 21 14:31:08 crc kubenswrapper[4834]: W0121 14:31:08.307370 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.307531 4834 trace.go:236] Trace[1974790986]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:30:58.305) (total time: 10002ms): Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[1974790986]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:31:08.307) Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[1974790986]: [10.002250656s] [10.002250656s] END Jan 21 14:31:08 crc kubenswrapper[4834]: E0121 14:31:08.307571 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 14:31:08 crc kubenswrapper[4834]: W0121 14:31:08.625781 4834 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.625956 4834 trace.go:236] Trace[1729538210]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 14:30:58.624) (total time: 10001ms): Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[1729538210]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:31:08.625) Jan 21 14:31:08 crc kubenswrapper[4834]: Trace[1729538210]: [10.001575764s] [10.001575764s] END Jan 21 14:31:08 crc kubenswrapper[4834]: E0121 14:31:08.625998 4834 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.973919 4834 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.974010 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.979846 4834 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 14:31:08 crc kubenswrapper[4834]: I0121 14:31:08.979918 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 14:31:09 crc kubenswrapper[4834]: I0121 14:31:09.071202 4834 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]log ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]etcd ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-filter ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-apiextensions-informers ok Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-system-namespaces-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/bootstrap-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/start-kube-aggregator-informers ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 21 14:31:09 crc kubenswrapper[4834]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]autoregister-completion ok Jan 21 14:31:09 crc kubenswrapper[4834]: 
[+]poststarthook/apiservice-openapi-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 21 14:31:09 crc kubenswrapper[4834]: livez check failed Jan 21 14:31:09 crc kubenswrapper[4834]: I0121 14:31:09.071276 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:31:09 crc kubenswrapper[4834]: I0121 14:31:09.275908 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:54:13.733214298 +0000 UTC Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.276882 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:06:42.455613888 +0000 UTC Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.864883 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.867077 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.867148 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.867173 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:10 crc kubenswrapper[4834]: I0121 14:31:10.867223 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:31:10 crc kubenswrapper[4834]: E0121 14:31:10.872503 4834 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 21 14:31:11 crc kubenswrapper[4834]: I0121 14:31:11.277200 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:17:17.28655561 +0000 UTC Jan 21 14:31:11 crc kubenswrapper[4834]: I0121 14:31:11.495067 4834 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 14:31:12 crc kubenswrapper[4834]: I0121 14:31:12.278084 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:52:50.480110309 +0000 UTC Jan 21 14:31:12 crc kubenswrapper[4834]: I0121 14:31:12.452333 4834 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 14:31:12 crc kubenswrapper[4834]: I0121 14:31:12.555622 4834 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 14:31:12 crc kubenswrapper[4834]: I0121 14:31:12.615743 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 14:31:12 crc kubenswrapper[4834]: I0121 14:31:12.643601 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.265523 4834 apiserver.go:52] "Watching apiserver" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 
14:31:13.269134 4834 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.269517 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.270002 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.270029 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:13 crc kubenswrapper[4834]: E0121 14:31:13.270100 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.270486 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.270546 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.271328 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:13 crc kubenswrapper[4834]: E0121 14:31:13.271426 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.271524 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.271570 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:13 crc kubenswrapper[4834]: E0121 14:31:13.271723 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.272259 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.272716 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.272724 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.272886 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.273056 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.273210 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.273257 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.273601 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.278317 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:11:40.791601035 +0000 UTC Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.291615 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.303569 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.313919 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.323544 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.331455 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.340483 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.358998 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc
6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.365050 4834 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.368696 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: I0121 14:31:13.379706 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:13 crc kubenswrapper[4834]: E0121 14:31:13.457298 4834 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:13.974974 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:13.978511 4834 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:13.982871 4834 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.071641 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.075843 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079014 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079044 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079067 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079087 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079106 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079123 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079138 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079280 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079302 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079316 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079335 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079350 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079368 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079386 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079400 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079595 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079613 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079632 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079652 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079682 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079708 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079749 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079740 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079769 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079861 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079889 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079960 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.079986 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080006 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080028 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080049 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080073 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080099 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080121 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080142 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080161 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080191 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080215 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080248 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080287 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080310 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080335 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080361 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc 
kubenswrapper[4834]: I0121 14:31:14.080406 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080443 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080467 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080492 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080517 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080543 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080567 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080591 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080615 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080639 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080684 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080708 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080735 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080760 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080797 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080829 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080853 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080898 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080920 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: 
I0121 14:31:14.080984 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081009 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081031 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081052 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081074 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081095 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081121 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081181 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081203 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081231 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 
14:31:14.081257 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081287 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081324 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081349 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081371 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081395 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081418 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081443 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081470 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081496 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:31:14 crc 
kubenswrapper[4834]: I0121 14:31:14.081542 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081566 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081591 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081616 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081643 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081669 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081693 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081724 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081751 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081778 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081804 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081826 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081802 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081849 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081876 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081902 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081944 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081970 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081993 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082020 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082045 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082070 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082097 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082128 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082154 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082181 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082211 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082240 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082267 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082583 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080250 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod 
"31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080334 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080452 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080628 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.080648 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081191 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081414 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081545 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.081858 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082007 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082187 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.082328 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:14.582268103 +0000 UTC m=+20.556617378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099646 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099677 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099693 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099739 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099822 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099815 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099910 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099989 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100030 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100127 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100156 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:31:14 crc kubenswrapper[4834]: 
I0121 14:31:14.100186 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100215 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100248 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100255 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100281 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100314 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100347 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100381 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100415 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100445 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100478 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100509 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100536 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100546 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082514 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082581 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082641 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082862 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082878 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083145 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083417 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083557 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083646 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083747 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.083808 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084028 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084240 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084265 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084293 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084333 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084464 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084517 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084567 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084678 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.084947 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085032 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085065 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085251 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085307 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085471 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085633 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085809 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085856 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.085988 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.086187 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.086296 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.086566 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.087100 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.087664 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.088111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.088440 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.089004 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.090512 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.090672 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.091006 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.091019 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.091382 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.092122 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.092298 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.092456 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.092588 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.093385 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.093810 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.094123 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.094419 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.094545 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.095004 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.095363 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.095533 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.095714 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.095944 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.096238 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.096446 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101397 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.096659 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.096880 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101508 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.097080 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.097522 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.098436 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.098813 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.098911 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.099392 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100901 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100943 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101745 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101769 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101022 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101801 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101824 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.101917 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102117 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102163 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102381 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102604 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102829 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.102990 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.103012 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.103172 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.082339 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.100547 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.103978 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104053 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104113 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104155 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104190 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104234 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104271 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104304 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104337 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104374 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104426 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104498 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104552 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104598 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104643 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104704 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104752 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104813 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.104906 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105004 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105072 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105121 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105175 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105223 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105270 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105327 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105375 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105429 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105486 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105535 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105585 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105637 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105691 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105742 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105790 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105904 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105993 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106723 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106794 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106834 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106876 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.105607 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113729 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113820 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113868 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113905 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113981 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114015 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114057 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114139 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114173 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114207 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114240 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114289 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114339 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114387 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114423 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114465 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114523 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114560 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114610 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114647 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114683 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114716 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114795 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117140 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117287 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117394 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117489 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117584 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117668 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117706 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.117882 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118083 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118297 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118380 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118543 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118845 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118907 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118975 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118997 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119018 4834 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119075 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119110 4834 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119175 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119196 4834 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119216 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119357 4834 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119378 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119399 4834 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119476 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119552 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119576 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119597 4834 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119660 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119683 4834 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119752 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.119879 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120007 4834 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120071 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120108 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120170 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120190 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120275 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120298 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120373 4834 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120395 4834 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120469 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120547 4834 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120569 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120591 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120654 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120675 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120734 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120755 4834 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120775 4834 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120835 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120855 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120907 4834 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121023 4834 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121110 4834 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121191 4834 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121224 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121321 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121425 4834 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121527 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121644 4834 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121753 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121789 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121821 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121842 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121901 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121959 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.121982 4834 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122003 4834 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122023 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122043 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122064 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122084 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122147 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122169 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122204 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122227 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122248 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122269 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122297 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122318 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122338 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122360 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122381 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122411 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122433 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122461 4834 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122535 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122565 4834 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122585 4834 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122606 4834 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122629 4834 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122650 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122681 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122710 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122755 4834 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122785 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122805 4834 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122826 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 
14:31:14.122848 4834 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122869 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122901 4834 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.122968 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123000 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123026 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123049 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123069 4834 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123090 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123110 4834 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123132 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123152 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123187 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath 
\"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123208 4834 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123295 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123319 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.123339 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.133626 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.136483 4834 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106031 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106250 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.106979 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.107195 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.107647 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.108112 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.108867 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.109170 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.109378 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.109495 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.109521 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.109600 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.110103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.111479 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.111489 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.111923 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.112511 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.112693 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.112720 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.112895 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.112906 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113213 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113226 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113426 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113700 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.113894 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114138 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114476 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.114819 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.115367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.115920 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.116179 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.118364 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.120151 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.123544 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.138570 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:14.638544974 +0000 UTC m=+20.612894019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.129066 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.129780 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.130040 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.130180 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.130255 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.131187 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.131429 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.131786 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.133323 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.136576 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.136984 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.137080 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.137162 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.137352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.137908 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.138009 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.138007 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.140465 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.140824 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.140867 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.141216 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.141436 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.141786 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.141808 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.142367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.144069 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.145652 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.145708 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.146555 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.148234 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.148573 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.148708 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.148773 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:14.648752312 +0000 UTC m=+20.623101357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.149306 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.150585 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.152588 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.152707 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.156309 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.160671 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.161856 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.161959 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.162718 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.165849 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.166219 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.166446 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.166997 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.172814 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17c
a2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.176197 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.176233 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.176248 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.176319 4834 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:14.676299263 +0000 UTC m=+20.650648308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.177367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.177774 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.182459 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.182515 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.182554 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.182647 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.191061 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.192161 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.193383 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.193412 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.193428 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.193490 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:14.693470681 +0000 UTC m=+20.667819726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.193797 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.194023 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.194546 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.196683 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.198504 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.199231 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.203392 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.204017 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.204330 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.207765 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: W0121 14:31:14.220407 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-90c33fe871d2b4d548f61df914e393b07596a77a74af1786aebcc2a754c13144 WatchSource:0}: Error finding container 90c33fe871d2b4d548f61df914e393b07596a77a74af1786aebcc2a754c13144: Status 404 returned error can't find the container with id 90c33fe871d2b4d548f61df914e393b07596a77a74af1786aebcc2a754c13144 Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.220846 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224283 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224369 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224382 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224406 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224420 4834 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224434 4834 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224444 4834 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224454 4834 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224466 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224476 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224485 4834 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224494 4834 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224502 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224511 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224521 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224534 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.224544 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.229383 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.244841 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.249367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250028 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250085 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250099 4834 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250131 4834 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250142 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250153 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250163 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250181 4834 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250192 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250202 4834 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250214 4834 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 
14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250228 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250239 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250249 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250260 4834 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250276 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250286 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250297 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250313 4834 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250324 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250336 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250345 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250358 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250369 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250380 4834 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250389 4834 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250402 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250412 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250423 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250435 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250446 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250456 4834 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250486 4834 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250499 4834 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250510 4834 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250522 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250532 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250546 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250558 4834 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250568 4834 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250579 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250592 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250603 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250614 4834 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250627 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250636 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250647 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250657 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250669 4834 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250678 4834 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250688 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250698 4834 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250712 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250724 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250737 4834 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250754 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250767 4834 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250778 4834 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250790 4834 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250802 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250812 4834 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250822 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250831 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250844 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250853 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250869 4834 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250878 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.250891 4834 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.252242 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.232461 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.254506 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.262803 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.270081 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.279330 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:04:58.335425141 +0000 UTC Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.284602 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.305286 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb
8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.321961 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.332016 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.332859 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.332884 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.334868 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.335762 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.337074 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.337643 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.338357 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.339596 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.340466 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.341638 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.342277 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.343012 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.343575 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.344143 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.344706 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.345655 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.346238 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.347193 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.347706 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.348386 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351115 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351624 4834 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" 
Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351654 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351665 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351675 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351685 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.351858 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.353161 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.353769 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.355607 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.361189 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.361723 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.362505 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.364086 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.364696 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.364985 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.367642 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.368152 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.368608 4834 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.368709 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.370006 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 14:31:14 crc 
kubenswrapper[4834]: I0121 14:31:14.370478 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.370887 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.374382 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.375038 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.375715 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.375987 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.376792 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.377866 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.378369 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.378994 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.380140 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.381070 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.381512 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.382426 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.382910 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.384449 4834 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.384893 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.385667 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.386113 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.386246 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.386608 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.387552 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.388075 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.396303 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.406349 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.416199 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.434525 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",
\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.448006 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.455482 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"90c33fe871d2b4d548f61df914e393b07596a77a74af1786aebcc2a754c13144"} Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.459287 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.461494 4834 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.478433 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.482842 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.488479 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.491252 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.501239 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.526262 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.655543 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.655632 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.655665 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.655820 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.655878 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:15.655861544 +0000 UTC m=+21.630210589 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.655960 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:15.655951937 +0000 UTC m=+21.630300982 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.655997 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.656021 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:15.656014829 +0000 UTC m=+21.630363864 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.722243 4834 csr.go:261] certificate signing request csr-htsz7 is approved, waiting to be issued Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.756612 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.756669 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.756814 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.756832 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.756845 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.756897 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:15.756881525 +0000 UTC m=+21.731230570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.756993 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.757004 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.757011 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: E0121 14:31:14.757042 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:15.75703592 +0000 UTC m=+21.731384965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:14 crc kubenswrapper[4834]: I0121 14:31:14.845969 4834 csr.go:257] certificate signing request csr-htsz7 is issued Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.280078 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:45:36.714178638 +0000 UTC Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.323593 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.323691 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.323750 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.323857 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.323937 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.323991 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.458396 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a"} Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.458474 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd"} Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.459380 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e1d9f99d1bd1125af894a70be8078f47539d95c1f8d4fef1ed0653827518f1a5"} Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.460362 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b"} Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.460400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1a6d0a2a14897b28bc91fe74986eb3eec2633d5014cf957c98cf3951cede64bb"} Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.504070 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.545091 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.571497 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.620472 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.646811 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.664409 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.664528 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.664570 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.664632 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:17.664593195 +0000 UTC m=+23.638942240 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.664662 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.664685 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.664734 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:17.664716329 +0000 UTC m=+23.639065564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.664752 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:17.66474517 +0000 UTC m=+23.639094455 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.665052 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.689164 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.704201 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.715818 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.726632 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-86g84"] Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.727096 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.729146 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8stvm"] Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.729567 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.730598 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.734747 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.734980 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.735202 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.735317 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.735473 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.735585 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.736275 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.754157 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.765252 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.765378 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765470 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765496 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765510 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765540 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765560 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:17.765543613 +0000 UTC m=+23.739892658 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765564 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765583 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:15 crc kubenswrapper[4834]: E0121 14:31:15.765643 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:17.765624236 +0000 UTC m=+23.739973491 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.771086 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.794391 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.816835 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.830446 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.844782 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.846881 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 14:26:14 +0000 UTC, rotation deadline is 2026-12-13 14:15:17.117736724 +0000 UTC Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.846915 4834 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7823h44m1.270825699s for next certificate rotation Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.856128 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865724 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b9d51eb-93f7-4c89-8c91-258f908c766d-proxy-tls\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865764 
4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/868b7692-0771-42e1-8bfc-1882f6204823-hosts-file\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b9d51eb-93f7-4c89-8c91-258f908c766d-mcd-auth-proxy-config\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865861 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jbh9\" (UniqueName: \"kubernetes.io/projected/868b7692-0771-42e1-8bfc-1882f6204823-kube-api-access-6jbh9\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865885 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4b9d51eb-93f7-4c89-8c91-258f908c766d-rootfs\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.865910 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5snc\" (UniqueName: \"kubernetes.io/projected/4b9d51eb-93f7-4c89-8c91-258f908c766d-kube-api-access-x5snc\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.871482 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.888844 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.912033 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.923544 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.934217 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.947277 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.961313 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967079 
4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b9d51eb-93f7-4c89-8c91-258f908c766d-proxy-tls\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967114 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/868b7692-0771-42e1-8bfc-1882f6204823-hosts-file\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967134 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b9d51eb-93f7-4c89-8c91-258f908c766d-mcd-auth-proxy-config\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967157 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jbh9\" (UniqueName: \"kubernetes.io/projected/868b7692-0771-42e1-8bfc-1882f6204823-kube-api-access-6jbh9\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967175 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4b9d51eb-93f7-4c89-8c91-258f908c766d-rootfs\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.967191 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5snc\" (UniqueName: \"kubernetes.io/projected/4b9d51eb-93f7-4c89-8c91-258f908c766d-kube-api-access-x5snc\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.968016 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/868b7692-0771-42e1-8bfc-1882f6204823-hosts-file\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.968323 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4b9d51eb-93f7-4c89-8c91-258f908c766d-rootfs\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.969028 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b9d51eb-93f7-4c89-8c91-258f908c766d-mcd-auth-proxy-config\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 
14:31:15.972291 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.984161 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.994349 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5snc\" (UniqueName: \"kubernetes.io/projected/4b9d51eb-93f7-4c89-8c91-258f908c766d-kube-api-access-x5snc\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.995116 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b9d51eb-93f7-4c89-8c91-258f908c766d-proxy-tls\") pod \"machine-config-daemon-86g84\" (UID: \"4b9d51eb-93f7-4c89-8c91-258f908c766d\") " pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:15 crc kubenswrapper[4834]: I0121 14:31:15.995647 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jbh9\" (UniqueName: \"kubernetes.io/projected/868b7692-0771-42e1-8bfc-1882f6204823-kube-api-access-6jbh9\") pod \"node-resolver-8stvm\" (UID: \"868b7692-0771-42e1-8bfc-1882f6204823\") " pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.014988 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.041132 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.048512 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8stvm" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.056400 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.072958 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.088660 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.094541 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-66jlt"] Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.095175 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gd9jh"] Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.095350 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.095655 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.108847 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.109401 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.109567 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.109689 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.109858 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.110064 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.110182 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.122578 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.143379 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.159041 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.181277 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.203863 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.219024 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.241482 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.255798 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269207 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-system-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269251 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhj4\" (UniqueName: \"kubernetes.io/projected/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-kube-api-access-7xhj4\") pod 
\"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269287 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269306 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-os-release\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269319 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cni-binary-copy\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269363 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-netns\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269379 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-etc-kubernetes\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269398 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-system-cni-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269418 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-os-release\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269435 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269452 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-kubelet\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269478 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtm7\" (UniqueName: \"kubernetes.io/projected/a8efbee9-2d1d-473f-ad38-b10d84821e23-kube-api-access-mbtm7\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269496 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-bin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269514 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-hostroot\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269535 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-cnibin\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269567 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269585 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-k8s-cni-cncf-io\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269609 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-multus\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269625 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-daemon-config\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269662 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cnibin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269679 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-socket-dir-parent\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269695 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-conf-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269721 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.269737 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-multus-certs\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.270984 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.280840 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:11:40.661598491 +0000 UTC Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.289775 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.307668 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.327261 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.341314 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is 
after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.358295 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.370991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-os-release\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371046 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cni-binary-copy\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371074 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-netns\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371101 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-etc-kubernetes\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371127 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371151 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-system-cni-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371193 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-os-release\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371215 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371214 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-etc-kubernetes\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371239 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-kubelet\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371305 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-system-cni-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371330 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-os-release\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371348 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbtm7\" (UniqueName: \"kubernetes.io/projected/a8efbee9-2d1d-473f-ad38-b10d84821e23-kube-api-access-mbtm7\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371356 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-kubelet\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371382 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-bin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371289 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-os-release\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371431 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-hostroot\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371452 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-bin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371462 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-cnibin\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371485 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-k8s-cni-cncf-io\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371494 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-hostroot\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371505 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371525 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-daemon-config\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371528 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-cnibin\") pod 
\"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-multus\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371549 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-k8s-cni-cncf-io\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371600 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-var-lib-cni-multus\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371564 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cnibin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371641 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-socket-dir-parent\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371659 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-conf-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371695 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371717 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-multus-certs\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371742 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-system-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371745 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-socket-dir-parent\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371753 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-conf-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371619 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cnibin\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371759 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhj4\" (UniqueName: \"kubernetes.io/projected/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-kube-api-access-7xhj4\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371805 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-multus-certs\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.371990 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-system-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372049 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-binary-copy\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372087 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-cni-dir\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372107 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-host-run-netns\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372204 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a8efbee9-2d1d-473f-ad38-b10d84821e23-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-66jlt\" (UID: 
\"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-cni-binary-copy\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372363 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8efbee9-2d1d-473f-ad38-b10d84821e23-tuning-conf-dir\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.372462 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-multus-daemon-config\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.373051 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.383323 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is 
after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.390631 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhj4\" (UniqueName: \"kubernetes.io/projected/dbe1b4f9-f835-43ba-9496-a9e60af3b87f-kube-api-access-7xhj4\") pod \"multus-gd9jh\" (UID: \"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\") " pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.395820 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbtm7\" (UniqueName: \"kubernetes.io/projected/a8efbee9-2d1d-473f-ad38-b10d84821e23-kube-api-access-mbtm7\") pod \"multus-additional-cni-plugins-66jlt\" (UID: \"a8efbee9-2d1d-473f-ad38-b10d84821e23\") " pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.404444 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.419596 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.434467 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gd9jh" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.436628 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.453005 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-66jlt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.464360 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8stvm" event={"ID":"868b7692-0771-42e1-8bfc-1882f6204823","Type":"ContainerStarted","Data":"385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802"} Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.464420 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8stvm" event={"ID":"868b7692-0771-42e1-8bfc-1882f6204823","Type":"ContainerStarted","Data":"a7a6f888a620cb02d5caec386e53cfa3e18c66a21d19328c98b38e5e97de9c35"} Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.466064 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerStarted","Data":"11681e593cded6b641321d6ff88eaacb37414ce9d96e98f5e1162ba439bee801"} Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.467824 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d"} Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.467855 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870"} Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.467869 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"e56e54067c5252c9166884a6a5171cd85979dfed36d4af73ffa38e9823d28d37"} Jan 21 14:31:16 crc kubenswrapper[4834]: W0121 14:31:16.469263 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8efbee9_2d1d_473f_ad38_b10d84821e23.slice/crio-eaf0232920f88355c0ec396958cae834e122cd8ff2f8cd5b197235ad4b6d9b7c WatchSource:0}: Error finding container eaf0232920f88355c0ec396958cae834e122cd8ff2f8cd5b197235ad4b6d9b7c: Status 404 returned error can't find the container with id eaf0232920f88355c0ec396958cae834e122cd8ff2f8cd5b197235ad4b6d9b7c Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.470655 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.479014 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qwpj"] Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.484067 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.489912 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490118 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490220 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490516 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490526 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490705 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.490831 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.505108 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.550242 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574390 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574441 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574462 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574509 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwbq\" (UniqueName: \"kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574532 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc 
kubenswrapper[4834]: I0121 14:31:16.574551 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574680 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574732 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574814 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574883 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574921 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.574981 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575033 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd\") pod \"ovnkube-node-6qwpj\" (UID: 
\"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575060 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575121 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575193 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575220 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575270 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.575297 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.576091 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.606778 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.666009 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676632 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash\") pod \"ovnkube-node-6qwpj\" (UID: 
\"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676680 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676744 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676760 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676777 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676812 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmwbq\" (UniqueName: \"kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676829 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676846 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676868 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676886 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676904 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676903 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676997 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677039 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677065 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677088 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677109 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.676918 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677299 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677335 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677353 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677374 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677432 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677573 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677595 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677616 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677638 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677662 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677690 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677690 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.677739 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.678150 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.678328 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config\") pod 
\"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.682529 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.711441 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmwbq\" (UniqueName: \"kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq\") pod \"ovnkube-node-6qwpj\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.736335 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.754796 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.783405 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.796749 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.813015 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.826803 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.838295 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.850518 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.866018 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.882232 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.903676 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.926168 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.940554 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.954337 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:16 crc kubenswrapper[4834]: I0121 14:31:16.971387 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.071734 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:17 crc kubenswrapper[4834]: W0121 14:31:17.091830 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3931d0_e57b_457f_94da_b56c92b40090.slice/crio-58fed71eb7c094fd87fa9ff6ea0db61cf1069f41da5a25249a34f29dae98f7dd WatchSource:0}: Error finding container 58fed71eb7c094fd87fa9ff6ea0db61cf1069f41da5a25249a34f29dae98f7dd: Status 404 returned error can't find the container with id 58fed71eb7c094fd87fa9ff6ea0db61cf1069f41da5a25249a34f29dae98f7dd Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.273026 4834 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.277477 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.277527 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.277543 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.277702 4834 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.281858 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:54:51.83478118 +0000 UTC Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.287230 4834 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.287602 4834 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.288719 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.288739 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.288750 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.288764 4834 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.288778 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.312742 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.316375 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.316422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.316432 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.316450 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.316463 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.327346 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.327554 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.328070 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.328150 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.328207 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.328267 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.330699 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.334750 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.334817 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.334833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.334857 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.334872 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.346040 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.349942 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.349986 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.349998 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.350018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.350028 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.361078 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.365300 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.365345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.365358 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.365377 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.365391 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.376109 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.376228 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.378452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.378492 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.378521 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.378542 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.378556 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.473461 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" exitCode=0 Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.473551 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.473590 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"58fed71eb7c094fd87fa9ff6ea0db61cf1069f41da5a25249a34f29dae98f7dd"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.475700 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8" exitCode=0 Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.475811 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.475864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerStarted","Data":"eaf0232920f88355c0ec396958cae834e122cd8ff2f8cd5b197235ad4b6d9b7c"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.478801 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.480582 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerStarted","Data":"5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.492222 4834 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.492306 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.492326 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.492357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.492377 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.493048 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.512607 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.531573 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.546827 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.560309 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.575509 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.596357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.596704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.596717 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.596734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.596750 4834 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.606360 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.619466 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.632820 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.651130 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.668702 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.684165 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.689054 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.689205 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.689243 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.689298 4834 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:21.689254706 +0000 UTC m=+27.663603771 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.689353 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.689380 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.689409 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:21.689393331 +0000 UTC m=+27.663742376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.689462 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:21.689438913 +0000 UTC m=+27.663787958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.695864 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.699442 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.699493 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.699515 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.699538 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc 
kubenswrapper[4834]: I0121 14:31:17.699554 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.711877 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.725204 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.746509 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.769293 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.784690 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.790062 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.790143 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790326 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790357 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790384 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790481 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:21.790453934 +0000 UTC m=+27.764802999 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790577 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790645 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790745 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:17 crc kubenswrapper[4834]: E0121 14:31:17.790860 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:21.790838926 +0000 UTC m=+27.765187971 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.798877 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.802175 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.802328 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.802421 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.802512 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.802591 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.813588 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.828738 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.842661 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.858330 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.871283 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.888921 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906385 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906862 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906947 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.906960 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:17Z","lastTransitionTime":"2026-01-21T14:31:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.919793 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:17 crc kubenswrapper[4834]: I0121 14:31:17.939187 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"
tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.009840 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.009871 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.009879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.009894 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.009903 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.112964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.113010 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.113022 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.113042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.113055 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.215187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.215228 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.215246 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.215263 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.215273 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.282570 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:49:00.674088266 +0000 UTC Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.318088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.318193 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.318214 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.318239 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.318255 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.421399 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.421432 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.421441 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.421455 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.421463 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.486729 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb" exitCode=0 Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.486780 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491766 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491838 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491857 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491873 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491886 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.491901 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.510858 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.523995 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.524044 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.524055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.524073 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.524085 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.526364 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.544664 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.559618 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.572681 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.602292 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.623568 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.628086 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.628116 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.628126 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.628142 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.628153 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.636578 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.651385 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.664851 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.678318 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.692419 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.704977 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.722676 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:18Z is after 2025-08-24T17:21:41Z"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.730115 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.730155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.730166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.730185 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.730197 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.832726 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.832775 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.832789 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.832806 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.832819 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.935346 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.935445 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.935457 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.935481 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:18 crc kubenswrapper[4834]: I0121 14:31:18.935491 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:18Z","lastTransitionTime":"2026-01-21T14:31:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.038096 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.038150 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.038165 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.038191 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.038209 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.140958 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.141276 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.141368 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.141452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.141561 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
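Annotation: every "Failed to update status for pod" entry in this stretch fails at the same point. The patch body is well-formed; the apiserver rejects it because the admission webhook's serving certificate is expired (the node clock, 2026-01-21, is past the certificate's NotAfter of 2025-08-24T17:21:41Z), which is consistent with a CRC VM resumed long after its certificates were minted. A minimal Go sketch of the x509 validity-window check that produces this error class; the certificate path is an assumption, modeled on the /etc/webhook-cert/ mount shown in the network-node-identity pod status later in this log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed path, modeled on the webhook-cert volume mount in this log.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in tls.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The same validity-window comparison the TLS handshake performs;
	// failing it yields "x509: certificate has expired or is not yet valid".
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	} else {
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Checking NotAfter this way distinguishes a clock/certificate problem from the network errors that dominate the rest of the log.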
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.244635 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.245040 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.245129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.245217 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.245301 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.283626 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:22:15.35157477 +0000 UTC
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.324264 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.324323 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:31:19 crc kubenswrapper[4834]: E0121 14:31:19.324770 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.324334 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:31:19 crc kubenswrapper[4834]: E0121 14:31:19.324957 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:31:19 crc kubenswrapper[4834]: E0121 14:31:19.325038 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
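Annotation: the certificate_manager line above reports a rotation deadline (2025-11-13) that is already months in the past relative to the log's clock, so the kubelet-serving certificate is overdue for rotation the moment the node comes up, even though its NotAfter (2026-02-24) has not yet passed. Client-go's certificate manager schedules that deadline at a jittered point roughly 70 to 90 percent of the way through the validity window; the following is a sketch of that idea under an assumed one-year lifetime, not the exact upstream implementation:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline sketches how client-go's certificate manager schedules
// rotation: at a jittered point roughly 70-90% of the way through the
// certificate's validity window. The upstream jitter differs in detail.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // somewhere in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	// NotAfter is taken from the kubelet-serving line in this log;
	// the one-year lifetime is an assumption.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
}
```

Under that assumption the logged deadline of 2025-11-13 falls inside the expected 70-90% band, which is why the kubelet treats rotation as immediately due rather than as an error.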
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.353625 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.353896 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.354019 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.354113 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.354192 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.457305 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.457353 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.457364 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.457388 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.457401 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.499909 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7" exitCode=0 Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.499963 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.527471 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12b
a8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.550428 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.559886 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.559950 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.559965 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.559984 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.560000 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.565258 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.578107 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.591985 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.613223 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.631136 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.641398 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.654798 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.665985 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.666043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.666053 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.666075 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.666090 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.673067 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.686767 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.703777 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.718523 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.739183 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.768840 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.768887 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.768897 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.768915 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.768939 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.821268 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jjx4h"] Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.821758 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.824220 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.824897 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.824919 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.824948 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.840367 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.852911 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.868476 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus
\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.871350 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.871388 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.871400 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.871418 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.871430 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.906600 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a
377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.913948 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b767bc6-35ad-425a-a3b7-09783d972cf0-host\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.913999 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8x6\" (UniqueName: \"kubernetes.io/projected/3b767bc6-35ad-425a-a3b7-09783d972cf0-kube-api-access-9p8x6\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.914050 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3b767bc6-35ad-425a-a3b7-09783d972cf0-serviceca\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.933961 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.969353 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.974449 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.974492 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.974506 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.974526 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.974540 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:19Z","lastTransitionTime":"2026-01-21T14:31:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.982501 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:19 crc kubenswrapper[4834]: I0121 14:31:19.996505 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:19Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.009894 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.014611 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3b767bc6-35ad-425a-a3b7-09783d972cf0-serviceca\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.014677 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b767bc6-35ad-425a-a3b7-09783d972cf0-host\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.014704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8x6\" (UniqueName: \"kubernetes.io/projected/3b767bc6-35ad-425a-a3b7-09783d972cf0-kube-api-access-9p8x6\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.014803 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b767bc6-35ad-425a-a3b7-09783d972cf0-host\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.015876 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3b767bc6-35ad-425a-a3b7-09783d972cf0-serviceca\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.018952 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.028820 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.039052 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8x6\" (UniqueName: \"kubernetes.io/projected/3b767bc6-35ad-425a-a3b7-09783d972cf0-kube-api-access-9p8x6\") pod \"node-ca-jjx4h\" (UID: \"3b767bc6-35ad-425a-a3b7-09783d972cf0\") " pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.042512 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.056725 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.068912 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.077016 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.077051 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.077062 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.077082 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.077092 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.082663 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.158294 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jjx4h" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.179616 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.179673 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.179683 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.179699 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.179709 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: W0121 14:31:20.182797 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b767bc6_35ad_425a_a3b7_09783d972cf0.slice/crio-73653988c29451ac1ae3925c6eb6f6f1bacd775b0db2b1e334b143c6c31fc518 WatchSource:0}: Error finding container 73653988c29451ac1ae3925c6eb6f6f1bacd775b0db2b1e334b143c6c31fc518: Status 404 returned error can't find the container with id 73653988c29451ac1ae3925c6eb6f6f1bacd775b0db2b1e334b143c6c31fc518 Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.282069 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.282142 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.282154 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.282413 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.282437 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.284141 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:34:34.114025069 +0000 UTC Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.386082 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.386128 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.386140 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.386160 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.386173 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.491693 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.491743 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.491757 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.491787 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.491801 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.513983 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1" exitCode=0 Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.514103 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.515826 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jjx4h" event={"ID":"3b767bc6-35ad-425a-a3b7-09783d972cf0","Type":"ContainerStarted","Data":"1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.515852 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jjx4h" event={"ID":"3b767bc6-35ad-425a-a3b7-09783d972cf0","Type":"ContainerStarted","Data":"73653988c29451ac1ae3925c6eb6f6f1bacd775b0db2b1e334b143c6c31fc518"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.531957 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.547968 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.558607 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.569155 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.581122 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.590097 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.596376 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.596428 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.596441 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.596462 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.596477 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.604952 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.617663 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.632979 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.646720 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.659792 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.671393 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.691731 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.699080 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.699118 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.699127 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.699143 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.699153 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.710976 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.734779 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.749826 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.771720 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.784647 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.800004 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.801382 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.801421 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.801431 4834 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.801449 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.801460 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.813398 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.827396 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.847567 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.860843 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.874534 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.887167 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903056 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903815 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903857 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.903884 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:20Z","lastTransitionTime":"2026-01-21T14:31:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.919461 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.932045 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.943589 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:20 crc kubenswrapper[4834]: I0121 14:31:20.953285 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:20Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.006213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.006254 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.006264 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.006283 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.006293 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.109235 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.109269 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.109280 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.109296 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.109307 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.211883 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.211945 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.211960 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.211978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.211989 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.285033 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:15:08.367949596 +0000 UTC Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.314621 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.314693 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.314716 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.314743 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.314763 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.324521 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.324541 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.324657 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.324660 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.324791 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.324879 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.418586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.418631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.418641 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.418657 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.418667 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.521340 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.521469 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.521491 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.521520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.521552 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.527844 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.533036 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43" exitCode=0 Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.533106 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.572368 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.595356 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.607466 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.621083 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.625377 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.625469 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.625488 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.625520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.625548 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.636059 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.652370 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.668890 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.682486 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.700776 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.723685 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.729488 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.729549 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.729559 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.729577 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.729592 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.733308 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.733396 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.733434 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.733568 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:29.733535533 +0000 UTC m=+35.707884578 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.733583 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.733634 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:29.733619545 +0000 UTC m=+35.707968590 (durationBeforeRetry 8s). 
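The two nestedpendingoperations entries above put numbers on the kubelet's retry policy: a failed volume operation is locked out until a deadline, and an 8s durationBeforeRetry is what an exponential schedule yields after a run of consecutive failures. A small sketch of that arithmetic, assuming the usual kubelet constants of a 500 ms initial delay doubling per failure up to a cap (the constants are an assumption, not in the log); the Error detail for the second failed operation continues directly below.

    # Sketch of exponential backoff between kubelet volume-operation retries.
    # INITIAL/FACTOR/CAP are assumed defaults, not taken from the log.
    INITIAL, FACTOR, CAP = 0.5, 2.0, 122.0  # seconds

    def duration_before_retry(failures: int) -> float:
        """Delay imposed after the given number of consecutive failures."""
        return min(INITIAL * FACTOR ** (failures - 1), CAP)

    for n in range(1, 7):
        print(f"failure {n}: retry in {duration_before_retry(n):.1f}s")
    # failure 5: retry in 8.0s -- consistent with durationBeforeRetry above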
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.733662 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.733701 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:29.733693198 +0000 UTC m=+35.708042233 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.738746 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.752717 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.764549 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],
\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.781291 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\
\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.802481 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:21Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833323 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833386 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833397 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833858 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.833908 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834042 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834066 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834077 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834118 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:29.834103769 +0000 UTC m=+35.808452814 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834042 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834144 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834155 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:21 crc kubenswrapper[4834]: E0121 14:31:21.834188 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:29.834178782 +0000 UTC m=+35.808527827 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.936753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.936805 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.936819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.936839 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:21 crc kubenswrapper[4834]: I0121 14:31:21.936852 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:21Z","lastTransitionTime":"2026-01-21T14:31:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.039670 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.039722 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.039733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.039752 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.039764 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.143007 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.143086 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.143110 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.143142 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.143162 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.246091 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.246129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.246140 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.246155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.246166 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.285898 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:58:52.130806076 +0000 UTC Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.349304 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.349369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.349387 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.349413 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.349434 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.452874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.452996 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.453024 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.453076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.453105 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.543061 4834 generic.go:334] "Generic (PLEG): container finished" podID="a8efbee9-2d1d-473f-ad38-b10d84821e23" containerID="d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38" exitCode=0 Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.543129 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerDied","Data":"d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.557024 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.557100 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.557127 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.557159 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.557185 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.570391 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.626369 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.659873 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.659917 4834 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.659951 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.659968 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.659979 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.662484 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.677388 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.687047 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.713564 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143
ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.732772 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.753727 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.762570 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.762609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.762621 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.762639 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.762651 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.780632 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.800737 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.816594 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.828791 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.844467 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64
d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.859526 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.866774 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.866813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.866821 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.866841 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.866854 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.871462 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:22Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.969762 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.969833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.969848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.969869 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:22 crc kubenswrapper[4834]: I0121 14:31:22.969899 4834 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:22Z","lastTransitionTime":"2026-01-21T14:31:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.074055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.074268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.074286 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.074311 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.074325 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.177429 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.177496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.177515 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.177541 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.177565 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.281816 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.281862 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.281873 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.281890 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.281900 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.287061 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:50:09.778684006 +0000 UTC Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.323629 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.323760 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:23 crc kubenswrapper[4834]: E0121 14:31:23.323828 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.323760 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:23 crc kubenswrapper[4834]: E0121 14:31:23.324257 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:23 crc kubenswrapper[4834]: E0121 14:31:23.324057 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.384848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.384896 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.384906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.384921 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.384946 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.487976 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.488030 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.488044 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.488066 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.488081 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.550284 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" event={"ID":"a8efbee9-2d1d-473f-ad38-b10d84821e23","Type":"ContainerStarted","Data":"e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.555995 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.557350 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.557482 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.580311 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit
-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.585500 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.591206 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.591241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.591249 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.591262 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.591271 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.594118 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.598470 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc 
kubenswrapper[4834]: I0121 14:31:23.612567 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.626327 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.647420 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z 
is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.671877 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.685708 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.695359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.695622 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.695680 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.695798 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.695867 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.698672 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.712527 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.730365 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.746677 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.760171 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.775355 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.794760 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.798986 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.799033 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.799047 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.799066 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.799078 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.812368 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ec
df087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.832861 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.848075 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.861452 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.871593 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.887444 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.902288 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.902462 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.902531 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.902615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.902709 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:23Z","lastTransitionTime":"2026-01-21T14:31:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.915469 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7
d4034e47aa9880b0dc0bc526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.929343 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.942055 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.953023 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.965738 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.981503 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:23 crc kubenswrapper[4834]: I0121 14:31:23.994392 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:23Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.005651 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.005711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.005725 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.005744 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.005777 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.009673 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.021323 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.043748 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.109416 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.109479 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc 
kubenswrapper[4834]: I0121 14:31:24.109489 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.109506 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.109517 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.183417 4834 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.220446 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.220510 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.220522 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.220539 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.220555 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.287459 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:08:45.201070719 +0000 UTC Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.324659 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.324996 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.325088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.325189 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.325328 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.346037 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\
\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.380909 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7
d4034e47aa9880b0dc0bc526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.412818 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.429613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.429702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.429725 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.429761 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.429785 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.438882 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.462859 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.483077 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.510024 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64
d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.531352 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.533617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.533697 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.533721 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.533756 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.533780 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.547298 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.560110 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.575096 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.598128 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.610412 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.628341 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.636538 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.636590 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.636605 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.636624 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.636638 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.643361 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.658073 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.743203 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.743299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc 
kubenswrapper[4834]: I0121 14:31:24.743325 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.743360 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.743386 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.846978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.847476 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.847491 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.847514 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.847532 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.950253 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.950292 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.950305 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.950328 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:24 crc kubenswrapper[4834]: I0121 14:31:24.950343 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:24Z","lastTransitionTime":"2026-01-21T14:31:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.052806 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.052840 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.052848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.052862 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.052873 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.155321 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.155386 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.155395 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.155411 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.155421 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.258564 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.258605 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.258613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.258630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.258640 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.288612 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:17:22.518276792 +0000 UTC Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.324201 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.324241 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.324463 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:25 crc kubenswrapper[4834]: E0121 14:31:25.324506 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:25 crc kubenswrapper[4834]: E0121 14:31:25.324672 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:25 crc kubenswrapper[4834]: E0121 14:31:25.324775 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.362094 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.362160 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.362174 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.362194 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.362209 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.464129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.464175 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.464188 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.464208 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.464221 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.563280 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.566503 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.566534 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.566544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.566559 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.566570 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.668781 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.668812 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.668821 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.668836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.668845 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.771813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.771877 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.771894 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.771912 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.771937 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.874775 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.874842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.874863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.874888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.874904 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.977960 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.978021 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.978043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.978062 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:25 crc kubenswrapper[4834]: I0121 14:31:25.978076 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:25Z","lastTransitionTime":"2026-01-21T14:31:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.081492 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.081545 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.081562 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.081583 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.081600 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.184296 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.184339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.184353 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.184372 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.184385 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.287330 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.287397 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.287416 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.287439 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.287456 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.289525 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:52:19.114283192 +0000 UTC
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.390819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.390878 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.390888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.390906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.390917 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.493639 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.493696 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.493709 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.493726 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.493738 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.577099 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/0.log"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.583860 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526" exitCode=1
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.583979 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526"}
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.584994 4834 scope.go:117] "RemoveContainer" containerID="2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.596267 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.596319 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.596331 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.596357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.596371 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.599790 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.615234 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.624461 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.635747 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.649444 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.660534 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.673028 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.685687 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.698945 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.698995 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.699009 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.699029 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.699044 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.704142 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.724730 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.737642 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.751715 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.761683 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.773196 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.792860 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a3775
05b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:26Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.801997 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.802082 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.802097 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.802116 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.802130 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.905554 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.905616 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.905635 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.905660 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:26 crc kubenswrapper[4834]: I0121 14:31:26.905679 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:26Z","lastTransitionTime":"2026-01-21T14:31:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.008306 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.008374 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.008387 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.008403 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.008414 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.110496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.110833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.110911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.111006 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.111064 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.213483 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.213529 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.213541 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.213561 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.213574 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.290071 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:06:40.38590203 +0000 UTC Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.320081 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.320151 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.320176 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.320408 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.320432 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.323678 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.323751 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.323838 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.323880 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.323935 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.323994 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.425399 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.425448 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.425464 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.425487 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.425501 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.528497 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.528546 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.528558 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.528577 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.528620 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.619188 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/0.log" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.622156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.622357 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.631113 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.631166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.631184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.631209 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.631225 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.639248 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.655169 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.678107 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.693776 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.707012 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.718048 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.718088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.718097 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.718111 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.718120 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.726382 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:
31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.733382 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.737482 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.737550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.737563 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.737609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.737627 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.742536 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.751588 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.755534 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.755580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.755591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.755607 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.755618 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.761419 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452
473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.769017 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.772337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.772454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.772548 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.772643 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.772747 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.787955 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.792000 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.792037 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.792080 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.792099 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.792109 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.793218 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.807677 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: E0121 14:31:27.807798 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810283 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810838 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810869 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810895 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.810906 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.823668 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.834690 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.846339 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.859530 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.872664 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:27Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.914690 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.914784 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.914817 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.914853 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:27 crc kubenswrapper[4834]: I0121 14:31:27.914878 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:27Z","lastTransitionTime":"2026-01-21T14:31:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.018795 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.018884 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.018914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.018993 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.019017 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.079370 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h"] Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.079838 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.082230 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.082475 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.108834 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.119332 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.119455 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.119492 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5pv\" (UniqueName: \"kubernetes.io/projected/4edb8fc0-f716-4a40-8028-fc796a8804bd-kube-api-access-wn5pv\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.119537 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.130464 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 
14:31:28.130536 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.130554 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.130583 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.130607 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.135632 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.149044 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.168584 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.210029 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.220148 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.220185 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5pv\" (UniqueName: \"kubernetes.io/projected/4edb8fc0-f716-4a40-8028-fc796a8804bd-kube-api-access-wn5pv\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.220213 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.220257 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.220989 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.221241 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4edb8fc0-f716-4a40-8028-fc796a8804bd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.230711 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4edb8fc0-f716-4a40-8028-fc796a8804bd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.233249 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.233300 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.233315 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.233339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.233355 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.244413 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452
473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.251412 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5pv\" (UniqueName: \"kubernetes.io/projected/4edb8fc0-f716-4a40-8028-fc796a8804bd-kube-api-access-wn5pv\") pod \"ovnkube-control-plane-749d76644c-95s5h\" (UID: \"4edb8fc0-f716-4a40-8028-fc796a8804bd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.269919 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.288217 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.291752 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:21:53.820680967 +0000 UTC Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.301631 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.314454 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.331249 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.336802 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.336866 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.336886 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.336955 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.336980 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.351500 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.366509 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.379292 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.397687 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed
81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.405042 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.412231 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.440253 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.440337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.440362 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.440392 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.440413 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.543444 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.543515 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.543537 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.543565 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.543587 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.629056 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" event={"ID":"4edb8fc0-f716-4a40-8028-fc796a8804bd","Type":"ContainerStarted","Data":"fbf6457cca647a3b1c10086432b515b5506b2edd9590f3d4a8475f523899c4a1"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.632116 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/1.log" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.633047 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/0.log" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.639112 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272" exitCode=1 Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.639169 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.639220 4834 scope.go:117] "RemoveContainer" containerID="2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.640105 4834 scope.go:117] "RemoveContainer" containerID="787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272" Jan 21 14:31:28 crc kubenswrapper[4834]: E0121 14:31:28.640309 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.647257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.647313 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.647333 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.647359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.647379 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.666414 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.688817 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.704727 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.720991 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.741196 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.750620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.750663 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.750681 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.750704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.750720 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.757741 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.776823 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.792013 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.812814 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.827951 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.849825 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.854705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.854760 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.854779 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.854806 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.854827 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.866205 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.880669 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.893140 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.908526 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.929215 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.958095 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.958217 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.958231 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.958250 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:28 crc kubenswrapper[4834]: I0121 14:31:28.958263 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:28Z","lastTransitionTime":"2026-01-21T14:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.061529 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.061559 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.061569 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.061583 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.061592 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.165887 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.166001 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.166027 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.166055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.166073 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.270850 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.270971 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.271022 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.271062 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.271144 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.292970 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:48:18.727279418 +0000 UTC Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.323622 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.323713 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.323850 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.324092 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.324613 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.324806 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.376609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.376665 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.376677 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.376696 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.376711 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.482532 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.482619 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.482678 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.482765 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.482786 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.587763 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.588557 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.588677 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.588718 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.588745 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.603258 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dtqf2"] Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.603863 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.603971 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.628720 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.637132 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.637197 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vs7\" (UniqueName: \"kubernetes.io/projected/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-kube-api-access-g8vs7\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.643480 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.657450 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.670881 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.684884 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.692268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.692324 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.692350 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.692378 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.692394 4834 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.697000 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.712711 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.727427 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.737995 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738169 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:31:45.738141148 +0000 UTC m=+51.712490203 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.738223 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.738287 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vs7\" (UniqueName: \"kubernetes.io/projected/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-kube-api-access-g8vs7\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.738316 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.738414 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738424 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738514 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:30.238475219 +0000 UTC m=+36.212824264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738520 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738568 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 14:31:45.738559061 +0000 UTC m=+51.712908116 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738651 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.738751 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:45.738730167 +0000 UTC m=+51.713079202 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.743305 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID
\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.754492 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.763505 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vs7\" (UniqueName: \"kubernetes.io/projected/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-kube-api-access-g8vs7\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.770213 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.784373 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.796511 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.796553 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.796562 4834 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.796578 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.796589 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.797359 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.817559 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.840251 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.840058 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452
473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped 
ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.840546 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.840604 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.840631 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.840745 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:45.840713558 +0000 UTC m=+51.815062643 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.841144 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.841181 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.841208 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:29 crc kubenswrapper[4834]: E0121 14:31:29.841293 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:45.841266935 +0000 UTC m=+51.815616010 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.840911 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.856601 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.887188 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:29Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.899707 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.899758 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.899768 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.899788 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:29 crc kubenswrapper[4834]: I0121 14:31:29.899801 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:29Z","lastTransitionTime":"2026-01-21T14:31:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.003785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.003855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.003874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.003904 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.003973 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.107074 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.107113 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.107121 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.107136 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.107149 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.210457 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.210530 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.210551 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.210578 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.210596 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.246776 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:30 crc kubenswrapper[4834]: E0121 14:31:30.246952 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:30 crc kubenswrapper[4834]: E0121 14:31:30.247021 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:31.247000956 +0000 UTC m=+37.221350011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.293659 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:40:05.358382601 +0000 UTC Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.317411 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.317480 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.317493 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.317516 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.317534 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.420485 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.420581 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.420608 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.420643 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.420671 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.524463 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.524547 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.524576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.524623 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.524665 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.627096 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.627160 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.627170 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.627188 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.627204 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.650544 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/1.log" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.655864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" event={"ID":"4edb8fc0-f716-4a40-8028-fc796a8804bd","Type":"ContainerStarted","Data":"ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.730498 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.730564 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.730580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.730600 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.730615 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.834002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.834688 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.834705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.834784 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.834805 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.937360 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.937424 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.937442 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.937467 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:30 crc kubenswrapper[4834]: I0121 14:31:30.937484 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:30Z","lastTransitionTime":"2026-01-21T14:31:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.040326 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.040366 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.040376 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.040393 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.040404 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.143255 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.143307 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.143323 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.143337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.143348 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.246124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.246164 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.246172 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.246189 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.246200 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.258874 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.259070 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.259145 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:33.259124472 +0000 UTC m=+39.233473517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.293801 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:25:42.189032231 +0000 UTC Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.324241 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.324271 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.324339 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.324382 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.324381 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.324497 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.324658 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:31 crc kubenswrapper[4834]: E0121 14:31:31.324784 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.348911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.349004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.349024 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.349050 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.349069 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.452168 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.452228 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.452240 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.452259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.452273 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.556298 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.556354 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.556371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.556391 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.556408 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.660023 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.660107 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.660154 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.660173 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.660190 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.661848 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" event={"ID":"4edb8fc0-f716-4a40-8028-fc796a8804bd","Type":"ContainerStarted","Data":"8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.678658 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.694224 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.707096 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.720898 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.733074 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.748085 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.762185 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.762745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.762901 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.763021 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.763108 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.763178 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.774795 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.793860 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.810810 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.825828 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.838132 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.851223 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host
-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.865753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.865802 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.865814 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.865833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.865846 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.869913 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.881897 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.893099 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.902260 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:31Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.967551 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.967593 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.967603 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.967619 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:31 crc kubenswrapper[4834]: I0121 14:31:31.967633 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:31Z","lastTransitionTime":"2026-01-21T14:31:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.070611 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.070690 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.070713 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.070744 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.070768 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.173838 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.173899 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.173911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.173951 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.173965 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.277124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.277180 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.277195 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.277215 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.277230 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.294545 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:20:00.894754945 +0000 UTC Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.380112 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.380168 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.380186 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.380214 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.380231 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.483266 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.483309 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.483319 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.483339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.483351 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.586062 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.586107 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.586119 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.586136 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.586150 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.689447 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.689486 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.689496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.689512 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.689523 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.792485 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.792524 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.792535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.792551 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.792561 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.894862 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.895344 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.895417 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.895520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:32 crc kubenswrapper[4834]: I0121 14:31:32.895585 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:32Z","lastTransitionTime":"2026-01-21T14:31:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.002213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.002250 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.002260 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.002275 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.002284 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.104807 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.104836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.104844 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.104858 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.104867 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.207464 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.207785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.207855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.208086 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.208250 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.281867 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.282143 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.282406 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:37.282383643 +0000 UTC m=+43.256732698 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.296249 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:08:12.532180094 +0000 UTC Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.311200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.311235 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.311244 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.311259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.311269 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.324004 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.324098 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.324158 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.324007 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.324220 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.324272 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.324416 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:33 crc kubenswrapper[4834]: E0121 14:31:33.324539 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.415110 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.415199 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.415220 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.415251 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.415271 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.519556 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.519617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.519629 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.519647 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.519659 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.622549 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.622603 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.622613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.622629 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.622640 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.725496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.725550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.725569 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.725595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.725613 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.828225 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.828259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.828268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.828282 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.828292 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.930840 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.930900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.931018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.931045 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:33 crc kubenswrapper[4834]: I0121 14:31:33.931061 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:33Z","lastTransitionTime":"2026-01-21T14:31:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.033868 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.033951 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.033964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.033987 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.034004 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.136543 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.136586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.136599 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.136617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.136627 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.240436 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.240480 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.240493 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.240513 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.240527 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.297602 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:01:09.910758023 +0000 UTC Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.343224 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.343266 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.343282 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.343299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.343310 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.345424 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.358062 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.372314 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.391372 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e560091b06323904634ef4ac4e20ab4489c85b7d4034e47aa9880b0dc0bc526\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:26Z\\\",\\\"message\\\":\\\"*v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764076 6108 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764475 6108 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764504 6108 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764525 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 14:31:25.764558 6108 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 14:31:25.764648 6108 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 14:31:25.764869 6108 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.403040 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.424466 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.438889 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.446422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.446472 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.446485 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.446513 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.446525 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.449222 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.459498 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.470098 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.482295 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.492142 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.504066 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.517023 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.529229 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.546167 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.549238 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.549428 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:34 crc 
kubenswrapper[4834]: I0121 14:31:34.549507 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.549586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.549660 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.560517 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:34Z is after 2025-08-24T17:21:41Z"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.653222 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.653625 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.653712 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.653854 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.653952 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.757738 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.758216 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.758345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.758503 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:34 crc kubenswrapper[4834]: I0121 14:31:34.758959 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:34Z","lastTransitionTime":"2026-01-21T14:31:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
-- the preceding five-entry sequence (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats with identical content at 14:31:34.861, 14:31:34.964, 14:31:35.068, 14:31:35.170 and 14:31:35.274 --
Jan 21 14:31:35 crc kubenswrapper[4834]: I0121 14:31:35.298163 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:37:02.419370469 +0000 UTC
Jan 21 14:31:35 crc kubenswrapper[4834]: I0121 14:31:35.323886 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:31:35 crc kubenswrapper[4834]: E0121 14:31:35.324158 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
Jan 21 14:31:35 crc kubenswrapper[4834]: I0121 14:31:35.324247 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:31:35 crc kubenswrapper[4834]: I0121 14:31:35.324396 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:31:35 crc kubenswrapper[4834]: I0121 14:31:35.324470 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:31:35 crc kubenswrapper[4834]: E0121 14:31:35.324415 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:31:35 crc kubenswrapper[4834]: E0121 14:31:35.324522 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:31:35 crc kubenswrapper[4834]: E0121 14:31:35.324676 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
-- the five-entry node-status sequence above repeats with identical content at 14:31:35.383, 14:31:35.489, 14:31:35.593, 14:31:35.695, 14:31:35.799, 14:31:35.904, 14:31:36.007, 14:31:36.113 and 14:31:36.216 --
Jan 21 14:31:36 crc kubenswrapper[4834]: I0121 14:31:36.299067 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:18:42.884664754 +0000 UTC
-- the five-entry node-status sequence above repeats with identical content at 14:31:36.320, 14:31:36.423, 14:31:36.527, 14:31:36.630, 14:31:36.733, 14:31:36.836, 14:31:36.939, 14:31:37.043, 14:31:37.147 and 14:31:37.251 --
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.300307 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:18:56.823869344 +0000 UTC
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.323738 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.323804 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.323868 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.323912 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.323885 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.324407 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.324399 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.324529 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.327266 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.327401 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.327463 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:31:45.327443968 +0000 UTC m=+51.301793023 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered
-- the five-entry node-status sequence above repeats with identical content at 14:31:37.354, 14:31:37.458, 14:31:37.562, 14:31:37.665, 14:31:37.768 and 14:31:37.872 --
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.892657 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.892704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.892716 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.892737 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.892749 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.906827 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:37Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.911478 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.911518 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.911528 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.911544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.911555 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.926598 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:37Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.930835 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.930874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.930886 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.930903 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.930917 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.945458 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:37Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.950400 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.950435 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.950447 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.950464 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.950474 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.963767 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:37Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.969421 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.969528 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.969549 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.969606 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.969630 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.986027 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:37Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:37 crc kubenswrapper[4834]: E0121 14:31:37.986153 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.990143 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.990235 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.990252 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.990274 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:37 crc kubenswrapper[4834]: I0121 14:31:37.990290 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:37Z","lastTransitionTime":"2026-01-21T14:31:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.094091 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.094167 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.094196 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.094278 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.094306 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.197851 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.197902 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.197912 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.197957 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.197969 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.301192 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:02:39.781331202 +0000 UTC Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.303049 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.303249 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.303460 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.303646 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.303849 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.407881 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.408011 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.408040 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.408079 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.408107 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.511591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.511654 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.511668 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.511689 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.511702 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.614733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.614801 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.614814 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.614834 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.614845 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.719364 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.719775 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.719871 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.720012 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.720110 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.823869 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.823914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.823944 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.823964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.823978 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.926866 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.926904 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.926916 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.926955 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.926966 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:38Z","lastTransitionTime":"2026-01-21T14:31:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.939137 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.939986 4834 scope.go:117] "RemoveContainer" containerID="787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.956665 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T14:31:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:38 crc kubenswrapper[4834]: I0121 14:31:38.994434 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155
a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:38Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.017750 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.030785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.030832 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.030845 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.030891 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.030910 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.041921 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.056897 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.071256 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.095014 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452
473bd6d850e75b8cb69c6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.110607 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.126060 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.133631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.133664 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.133675 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.133691 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.133702 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.138599 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.151877 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.168617 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.178886 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.191864 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.201770 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.214729 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.226201 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.237079 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.237101 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.237110 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.237124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.237134 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.301786 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:33:26.184542123 +0000 UTC Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.324642 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.324702 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.324764 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:39 crc kubenswrapper[4834]: E0121 14:31:39.324800 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.324707 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:39 crc kubenswrapper[4834]: E0121 14:31:39.325059 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:39 crc kubenswrapper[4834]: E0121 14:31:39.325187 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:39 crc kubenswrapper[4834]: E0121 14:31:39.325350 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.339494 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.339544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.339558 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.339577 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.339599 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.442109 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.442545 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.442557 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.442576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.442590 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.545156 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.545198 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.545210 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.545229 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.545242 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.648576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.648620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.648628 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.648645 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.648655 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.698887 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/1.log" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.701736 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.702298 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.720480 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.733191 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.743187 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.750536 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.750581 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.750593 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.750610 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.750621 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.755886 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.775853 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] 
Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.788303 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.807693 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.822988 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.841894 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.853700 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.853740 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.853749 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.853765 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.853778 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.854992 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.865153 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.873501 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.882288 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.895908 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.908861 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.920042 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.933261 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:39Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.956315 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.956370 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.956384 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.956407 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:39 crc kubenswrapper[4834]: I0121 14:31:39.956423 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:39Z","lastTransitionTime":"2026-01-21T14:31:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.059027 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.059076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.059085 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.059102 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.059113 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.162247 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.162285 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.162296 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.162315 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.162327 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.264631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.264697 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.264710 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.264730 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.264743 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.302142 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:17:03.02332763 +0000 UTC Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.366892 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.366963 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.366975 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.366996 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.367008 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.469550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.469586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.469594 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.469609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.469618 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.571661 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.571711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.571724 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.571744 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.571758 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.674136 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.674182 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.674192 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.674208 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.674218 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.707549 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/2.log" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.708375 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/1.log" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.711295 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" exitCode=1 Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.711344 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.711389 4834 scope.go:117] "RemoveContainer" containerID="787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.715085 4834 scope.go:117] "RemoveContainer" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" Jan 21 14:31:40 crc kubenswrapper[4834]: E0121 14:31:40.715314 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.726800 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.739544 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.750509 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.765189 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.776615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.776639 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.776647 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.776662 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.776673 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.779733 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.792296 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.803113 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.813728 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.828713 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.840475 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.858015 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.870141 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.878956 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.878993 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.879006 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.879023 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.879036 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.882403 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.892229 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.906777 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.924890 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae9150
67b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://787063c9e7c13def8404330a269e52c2901d6452473bd6d850e75b8cb69c6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:27Z\\\",\\\"message\\\":\\\"actory.egressNode crc took: 8.731769ms\\\\nI0121 14:31:27.627349 6231 factory.go:1336] Added *v1.Node event handler 7\\\\nI0121 14:31:27.627426 6231 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627443 6231 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 14:31:27.627496 6231 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 14:31:27.627515 6231 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 14:31:27.627530 6231 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 14:31:27.627558 6231 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 14:31:27.627564 6231 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 14:31:27.627587 6231 factory.go:656] Stopping watch factory\\\\nI0121 14:31:27.627583 6231 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 14:31:27.627618 6231 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 14:31:27.627961 6231 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 14:31:27.628105 6231 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 14:31:27.628193 6231 ovnkube.go:599] Stopped ovnkube\\\\nI0121 14:31:27.628249 6231 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 14:31:27.628417 6231 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ini
tContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.937263 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:40Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.981882 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.981957 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.981967 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.981983 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:40 crc kubenswrapper[4834]: I0121 14:31:40.981994 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:40Z","lastTransitionTime":"2026-01-21T14:31:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.084629 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.084669 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.084681 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.084702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.084715 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.187389 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.187434 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.187515 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.187537 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.187546 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.289762 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.290132 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.290202 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.290277 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.290395 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.303304 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:46:25.548363138 +0000 UTC Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.323599 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.323609 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:41 crc kubenswrapper[4834]: E0121 14:31:41.324171 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.323662 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:41 crc kubenswrapper[4834]: E0121 14:31:41.324313 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.323640 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:41 crc kubenswrapper[4834]: E0121 14:31:41.324548 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:41 crc kubenswrapper[4834]: E0121 14:31:41.324448 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.393820 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.393864 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.393875 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.393891 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.393901 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.496197 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.496507 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.496600 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.496746 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.496813 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.599784 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.599838 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.599848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.599863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.599874 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.702108 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.702152 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.702165 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.702180 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.702189 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.715759 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/2.log" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.718877 4834 scope.go:117] "RemoveContainer" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" Jan 21 14:31:41 crc kubenswrapper[4834]: E0121 14:31:41.719050 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.729340 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.740128 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.751673 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.763631 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.774967 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.792002 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804828 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804886 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804939 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804950 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804965 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.804974 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.814209 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.825466 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.843829 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae9150
67b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.857575 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.876784 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.890854 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.902198 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.907615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.907646 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.907656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.907672 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.907683 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:41Z","lastTransitionTime":"2026-01-21T14:31:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.912751 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.925029 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:41 crc kubenswrapper[4834]: I0121 14:31:41.935339 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:41Z is after 2025-08-24T17:21:41Z"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.010157 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.010194 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.010204 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.010218 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.010228 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.113198 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.113232 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.113242 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.113256 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.113270 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
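
Note: every "Failed to update status for pod" entry above fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, months before the node clock's 2026-01-21. A minimal Go sketch to confirm this from the node; the address is taken from the log, and dialing with verification disabled is deliberate, since the point is to inspect an expired certificate, not to trust it. This tool is an illustration, not part of the cluster.

    // certcheck.go - a minimal sketch: dial a TLS endpoint and print the
    // peer certificate's validity window. InsecureSkipVerify is deliberate:
    // we want to look at an expired certificate, not verify it.
    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the log; override via argv
        if len(os.Args) > 1 {
            addr = os.Args[1]
        }
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()
        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n  expired=%v\n",
                cert.Subject, cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
        }
    }

Run against the endpoint above it should report notAfter=2025-08-24T17:21:41Z and expired=true, matching the x509 errors in the log, assuming the webhook is still listening.
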
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.216400 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.216474 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.216483 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.216499 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.216508 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.303965 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:40:50.446769941 +0000 UTC
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.320520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.320579 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.320595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.320617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.320631 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
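
Note: the certificate_manager.go lines record a kubelet-serving certificate valid until 2026-02-24 whose rotation deadline (2026-01-10 here, a different instant on each later line) is already in the past, so the kubelet keeps re-rolling the deadline and attempting rotation immediately. A sketch of the jittered-deadline idea, assuming the usual client-go approach of rotating at a random point late in the certificate's lifetime; the 70-90% window and the one-year lifetime below are illustrative assumptions, not the kubelet's exact values.

    // rotation.go - a sketch of jittered certificate-rotation deadlines:
    // pick a random instant late in the cert's validity so a fleet of
    // kubelets does not rotate in lockstep. Constants are illustrative.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // Rotate somewhere between 70% and 90% of the way through the lifetime.
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
        notBefore := notAfter.Add(-365 * 24 * time.Hour)               // assumed 1y lifetime
        for i := 0; i < 3; i++ {
            d := rotationDeadline(notBefore, notAfter)
            fmt.Printf("deadline=%s past=%v\n", d.Format(time.RFC3339), time.Now().After(d))
        }
    }

Re-rolling the deadline on every pass is why consecutive log lines show different rotation deadlines for the same certificate.
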
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.423340 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.423377 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.423386 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.423401 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.423411 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.525837 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.525878 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.525889 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.525905 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.525914 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.629969 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.630019 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.630041 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.630063 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.630080 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.732863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.732944 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.732962 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.732986 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.733000 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.835593 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.835646 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.835659 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.835677 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.835690 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.938294 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.938337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.938347 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.938363 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:42 crc kubenswrapper[4834]: I0121 14:31:42.938375 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:42Z","lastTransitionTime":"2026-01-21T14:31:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.041021 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.041063 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.041073 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.041088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.041097 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.143282 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.143337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.143360 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.143375 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.143384 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.245596 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.245679 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.245709 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.245742 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.245765 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.305050 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:21:59.562005059 +0000 UTC
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.324547 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.324594 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.324689 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.324957 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:31:43 crc kubenswrapper[4834]: E0121 14:31:43.324910 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:31:43 crc kubenswrapper[4834]: E0121 14:31:43.325112 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:31:43 crc kubenswrapper[4834]: E0121 14:31:43.325444 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:31:43 crc kubenswrapper[4834]: E0121 14:31:43.325294 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
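
Note: the "No sandbox for pod" and "Error syncing pod" entries are all downstream of one condition: the kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, and the OVN-Kubernetes pods that would write one cannot get their status updated while the node-identity webhook certificate is expired. A minimal sketch of the directory check the message describes; the path comes from the log, while the extension filter follows common CNI conventions and is an assumption, not the kubelet's exact logic.

    // cnicheck.go - a sketch: report whether a CNI conf dir contains any
    // configuration files (.conf, .conflist, .json), roughly the condition
    // behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read dir:", err)
            os.Exit(1)
        }
        found := 0
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found++
                fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
            }
        }
        if found == 0 {
            fmt.Println("no CNI configuration files; the network plugin has not written its config yet")
        }
    }

On this node the directory would come back empty until ovnkube-node manages to start, which is exactly the loop the log keeps recording.
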
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.348992 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.349118 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.349136 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.349162 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.349182 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.452207 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.452287 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.452308 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.452338 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.452359 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
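
Note: the err="failed to patch status \"{\\\"metadata\\\"...}\"" entries throughout this log embed the attempted status patch as doubly escaped JSON, which is hard to read in place. A quick sketch that recovers such a patch for inspection; the raw string below is a truncated stand-in assembled from the entries above, not a complete patch, and the replacement order (deepest escapes first) is what makes it work.

    // unescape.go - a sketch: recover the doubly escaped JSON patch that
    // kubelet log lines embed in err="failed to patch status \"{\\\"...}\"".
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strings"
    )

    func main() {
        // Truncated stand-in for a patch lifted from one of the log lines above.
        raw := `{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"phase\\\":\\\"Running\\\"}}`
        s := strings.ReplaceAll(raw, `\\\"`, `"`) // innermost escaping first
        s = strings.ReplaceAll(s, `\"`, `"`)      // any remaining single level
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, []byte(s), "", "  "); err != nil {
            fmt.Println("not valid JSON after unescaping:", err)
            return
        }
        fmt.Println(pretty.String())
    }

Pretty-printed, the patches show the kubelet trying to record container states and pod conditions that the expired webhook then rejects wholesale.
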
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.555201 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.555274 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.555295 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.555316 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.555332 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.658531 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.658587 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.658602 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.658623 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.658637 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.760953 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.761011 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.761032 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.761060 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.761082 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.863256 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.863299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.863310 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.863329 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.863343 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.965789 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.965847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.965856 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.965873 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:43 crc kubenswrapper[4834]: I0121 14:31:43.965901 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:43Z","lastTransitionTime":"2026-01-21T14:31:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.068669 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.068705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.068713 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.068727 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.068737 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.171565 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.171619 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.171630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.171649 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.171666 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.274842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.274883 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.274914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.274964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.274978 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.305358 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:14:17.00680447 +0000 UTC Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.334377 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.349461 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.363779 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.376230 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.377292 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.377367 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.377380 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.377399 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.377413 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.397675 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.422346 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.439781 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 
14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.456154 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.474711 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.479772 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.479836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.479856 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.479881 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.479908 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.496339 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae9150
67b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.512252 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.543320 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.553629 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.566279 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.570608 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.582576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.582631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.582650 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.582676 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.582694 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.591181 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.604663 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279
9488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.618007 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.630544 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.640594 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.653276 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.668498 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.684115 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.686870 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.686980 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.687001 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.687034 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.687062 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.696530 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.711634 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.727279 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.739577 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.756823 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.769306 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 
14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.781689 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.791090 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.791142 4834 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.791155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.791174 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.791185 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.797429 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.814826 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var
-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.837238 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae9150
67b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.856989 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.873391 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.895302 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.895611 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.895749 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.895826 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.895951 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.897941 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.914830 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:44Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.999657 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.999741 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.999756 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.999778 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
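The status patch above is rejected because the network-node-identity webhook at https://127.0.0.1:9743 presents an expired serving certificate: the node clock (2026-01-21) is past the certificate's NotAfter (2025-08-24). The same validity check the TLS stack performs can be reproduced against the endpoint with a short stdlib Go program; the address and the InsecureSkipVerify setting (needed so the handshake completes and the dates can be inspected) are illustrative assumptions, not cluster tooling:

// webhook_cert_check.go - a minimal sketch of the x509 validity check failing above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Assumed endpoint from the log; adjust as needed.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // skip chain verification; we only want the dates
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert, now := certs[0], time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate valid until", cert.NotAfter.UTC())
	}
}

Until that certificate is rotated or the node clock is corrected, every pod status patch routed through this webhook fails the same way, which is why the errors repeat below.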
Jan 21 14:31:44 crc kubenswrapper[4834]: I0121 14:31:44.999795 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:44Z","lastTransitionTime":"2026-01-21T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.102948 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.102991 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.103001 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.103025 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.103041 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.205791 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.206183 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.206259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.206424 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.206499 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.306375 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:57:32.646108866 +0000 UTC Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.309125 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.309195 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.309215 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.309241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.309263 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.323621 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.323643 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.323706 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.323762 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.323760 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.323897 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.324008 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
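The certificate_manager.go entry above is worth pausing on: the kubelet-serving certificate expires 2026-02-24, yet the logged rotation deadline (2026-01-14) is already in the past, so rotation is due immediately, and the deadline printed on the retries that follow (2025-12-24 at 14:31:46, 2026-01-09 at 14:31:47) keeps moving because it is re-drawn with random jitter on every attempt. A minimal sketch of that deadline computation, assuming the commonly described jittered 70-90% point of the validity window; the exact constants and the issue date are assumptions, not values taken from the log:

// rotation_deadline.go - sketch of a jittered certificate rotation deadline.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point in roughly the last 10-30% of
// the certificate's validity window (assumed factor: 0.7 + 0.2*rand).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issue date assumed
	for i := 0; i < 3; i++ {
		// Each attempt re-rolls the jitter, which is why the logged
		// deadline differs between consecutive certificate_manager lines.
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter).UTC())
	}
}

Because every drawn deadline lands before the current node time, each pass concludes rotation is overdue and retries, producing the repeating certificate_manager lines.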
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.324074 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.412842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.412884 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.412896 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.412916 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.412946 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.422909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.423136 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.423200 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:01.423183088 +0000 UTC m=+67.397532133 (durationBeforeRetry 16s). 
Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.517107 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.517162 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.517176 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.517195 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.517208 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.619519 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.619575 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.619585 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.619604 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.619615 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.722462 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.722508 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.722518 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.722535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.722549 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.825462 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.825498 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.825507 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.825522 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.825532 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.827074 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.827224 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:32:17.827202834 +0000 UTC m=+83.801551879 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
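The TearDown above fails for a different reason than the secret and configmap mounts: the hostpath-provisioner CSI plugin has not yet re-registered with the restarted kubelet, so the driver name is simply absent from the kubelet's registry of CSI drivers. Conceptually the failing step is a keyed lookup; this toy registry stands in for the real kubelet data structure and is only a sketch:

// csi_registry.go - toy sketch of the driver-name lookup that fails above.
package main

import (
	"fmt"
	"sync"
)

type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> plugin socket path (assumed shape)
}

// client returns the endpoint for a registered driver, or an error that
// mirrors the message in the log when the name is unknown.
func (r *csiRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return sock, nil
}

func main() {
	// Empty registry: the plugin has not re-registered after the kubelet restart.
	reg := &csiRegistry{drivers: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("Unmounter.TearDownAt failed to get CSI client:", err)
	}
}

Once the plugin's registration socket is picked up again, the same lookup succeeds and the parked unmount operation can proceed on its next scheduled retry.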
Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.827288 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.827339 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.827368 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.827412 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:17.82740376 +0000 UTC m=+83.801752795 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.827529 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.827641 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:17.827610807 +0000 UTC m=+83.801959872 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.927997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928061 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928192 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928217 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928233 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928277 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:17.928260364 +0000 UTC m=+83.902609409 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928367 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928409 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928431 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:45 crc kubenswrapper[4834]: E0121 14:31:45.928501 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:17.928476441 +0000 UTC m=+83.902825536 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928688 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928739 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928756 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928781 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:45 crc kubenswrapper[4834]: I0121 14:31:45.928798 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:45Z","lastTransitionTime":"2026-01-21T14:31:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
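The kube-api-access-* volumes failing above are projected volumes: each bundles the pod's service account token with the kube-root-ca.crt and openshift-service-ca.crt configmaps, which is why every failure lists exactly those two "not registered" objects, and once mounted the result appears inside the container under /var/run/secrets/kubernetes.io/serviceaccount. A sketch of what a container sees when such a volume does mount; token, ca.crt and namespace are the standard projections, while the service-ca.crt file name for the OpenShift addition is an assumption here:

// sa_volume.go - sketch listing the files a kube-api-access projection provides.
// Runs only inside a pod with the volume mounted.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "service-ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Printf("%s: missing (%v)\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}

Until the kubelet re-registers those configmaps for the pod, projected.go cannot assemble the volume, so the pods stay stuck before their sandboxes are even created.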
Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.031273 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.031320 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.031332 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.031351 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.031363 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.133552 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.133691 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.133709 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.133735 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.133757 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.236706 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.236748 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.236761 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.236785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.236802 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.308132 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:05:25.102901496 +0000 UTC Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.339547 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.339604 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.339616 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.339638 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.339650 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.442942 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.442993 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.443007 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.443024 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.443037 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.546698 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.546787 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.546804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.546825 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.546839 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.650174 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.650227 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.650238 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.650259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.650272 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.754230 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.754363 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.754379 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.754406 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.754431 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.857687 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.857745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.857762 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.857788 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.857804 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.961178 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.961225 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.961234 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.961251 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:46 crc kubenswrapper[4834]: I0121 14:31:46.961261 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:46Z","lastTransitionTime":"2026-01-21T14:31:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.065246 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.065314 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.065329 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.065376 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.065394 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.168169 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.168235 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.168252 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.168277 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.168295 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.271658 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.271733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.271747 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.271767 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.271781 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.308949 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:49:42.426742865 +0000 UTC Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.323600 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.323682 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.323600 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:47 crc kubenswrapper[4834]: E0121 14:31:47.323832 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.324095 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:47 crc kubenswrapper[4834]: E0121 14:31:47.324258 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:47 crc kubenswrapper[4834]: E0121 14:31:47.324376 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:47 crc kubenswrapper[4834]: E0121 14:31:47.324530 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.375126 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.375250 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.375269 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.375296 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.375314 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.478679 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.478721 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.478733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.478752 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.478766 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.581465 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.581521 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.581534 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.581554 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.581568 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.684567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.684662 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.684679 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.684706 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.684757 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.788171 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.788228 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.788241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.788260 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.788272 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.891002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.891047 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.891062 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.891087 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.891104 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.995236 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.995310 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.995335 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.995369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:47 crc kubenswrapper[4834]: I0121 14:31:47.995395 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:47Z","lastTransitionTime":"2026-01-21T14:31:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.098977 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.099042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.099053 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.099076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.099089 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.113544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.113615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.113631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.113656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.113674 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
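Every "Node became not ready" entry above carries the same condition object, and the failed node status patch just below groups the four conditions under $setElementOrder/conditions, the strategic-merge-patch directive that keys list entries (here by type) instead of replacing the whole list. A sketch reproducing the condition payload with a local struct mirroring the fields visible in the log; this is a stand-in for illustration, not the k8s.io/api type:

// node_condition.go - sketch of the Ready condition payload logged by setters.go.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	ts, _ := time.Parse(time.RFC3339, "2026-01-21T14:31:48Z")
	cond := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  ts,
		LastTransitionTime: ts,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out)) // matches the condition={...} shape in the log
}

Note that the patch below never lands: it travels through the same expired-certificate webhook as the pod status update earlier, so the kubelet keeps re-recording the events and retrying.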
Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.129156 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.134172 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.134223 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.134236 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.134255 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.134272 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.149984 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.155305 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.155425 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.155466 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.155521 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.155548 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.173534 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.178241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.178279 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.178291 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.178315 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.178331 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.201437 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.205474 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.205514 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.205531 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.205558 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.205571 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.219404 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:48Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:48 crc kubenswrapper[4834]: E0121 14:31:48.219582 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.220971 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.221001 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.221013 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.221031 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.221042 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.309635 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:16:57.609159579 +0000 UTC Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.324194 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.324245 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.324272 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.324294 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:48 crc kubenswrapper[4834]: I0121 14:31:48.324310 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:48Z","lastTransitionTime":"2026-01-21T14:31:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 14:31:49 crc kubenswrapper[4834]: I0121 14:31:49.310828 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:42:17.847198864 +0000 UTC Jan 21 14:31:49 crc kubenswrapper[4834]: I0121 14:31:49.325295 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:49 crc kubenswrapper[4834]: I0121 14:31:49.325353 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:49 crc kubenswrapper[4834]: I0121 14:31:49.325310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:49 crc kubenswrapper[4834]: I0121 14:31:49.325420 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:49 crc kubenswrapper[4834]: E0121 14:31:49.325478 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:49 crc kubenswrapper[4834]: E0121 14:31:49.325526 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:49 crc kubenswrapper[4834]: E0121 14:31:49.325571 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:49 crc kubenswrapper[4834]: E0121 14:31:49.325689 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:31:50 crc kubenswrapper[4834]: I0121 14:31:50.311119 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:10:32.623787094 +0000 UTC
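The certificate_manager lines are a second, independent clock symptom: the kubelet-serving certificate is valid until 2026-02-24, yet every freshly computed rotation deadline (2025-11-30 through 2026-01-17 in this excerpt) is already in the past at the node's current time, so the kubelet immediately re-rolls a new deadline on each pass. The spread of deadlines is consistent with client-go's certificate manager, which, as I understand the upstream behavior, jitters the rotation threshold uniformly into roughly 70-90% of the certificate lifetime; a toy reproduction (Python; the one-year lifetime is an assumption, the expiry date and clock reading are from the log):

```python
import random
from datetime import datetime, timedelta

NOT_AFTER = datetime(2026, 2, 24, 5, 53, 3)    # expiry printed in the log
NOT_BEFORE = NOT_AFTER - timedelta(days=365)   # assumption: one-year certificate
NOW = datetime(2026, 1, 21, 14, 31, 50)        # node clock at the time of the log

lifetime = NOT_AFTER - NOT_BEFORE
for _ in range(5):
    # Jitter the rotation deadline into [70%, 90%] of the certificate lifetime.
    deadline = NOT_BEFORE + lifetime * random.uniform(0.7, 0.9)
    print(deadline, "<- already due" if deadline < NOW else "")
```

Under that assumption every draw lands between early November 2025 and mid-January 2026, i.e. before the node's clock, which matches the once-per-second recomputation seen in these entries.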
Jan 21 14:31:51 crc kubenswrapper[4834]: I0121 14:31:51.311590 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:44:07.801076286 +0000 UTC Jan 21 14:31:51 crc kubenswrapper[4834]: I0121 14:31:51.323893 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:51 crc kubenswrapper[4834]: I0121 14:31:51.323907 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:51 crc kubenswrapper[4834]: E0121 14:31:51.324049 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:51 crc kubenswrapper[4834]: I0121 14:31:51.323918 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:51 crc kubenswrapper[4834]: E0121 14:31:51.324146 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:51 crc kubenswrapper[4834]: I0121 14:31:51.323907 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:51 crc kubenswrapper[4834]: E0121 14:31:51.324228 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:51 crc kubenswrapper[4834]: E0121 14:31:51.324294 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.312711 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:00:22.903261061 +0000 UTC Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.351778 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.351825 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.351835 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.351852 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.351863 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.454855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.454891 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.454900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.454915 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.454925 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.557663 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.557710 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.557725 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.557746 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.557762 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.660297 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.660363 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.660375 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.660393 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.660405 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.762831 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.762880 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.762892 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.762913 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.762942 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.866074 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.866145 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.866155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.866171 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.866181 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.969396 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.969440 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.969455 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.969471 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:52 crc kubenswrapper[4834]: I0121 14:31:52.969484 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:52Z","lastTransitionTime":"2026-01-21T14:31:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.072147 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.072181 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.072200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.072223 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.072236 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.175147 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.175222 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.175240 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.175262 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.175280 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.277627 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.277689 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.277711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.277732 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.277749 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.314461 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:27:22.057697497 +0000 UTC Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.324132 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.324220 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.324214 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:53 crc kubenswrapper[4834]: E0121 14:31:53.324313 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:53 crc kubenswrapper[4834]: E0121 14:31:53.324402 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.324457 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:53 crc kubenswrapper[4834]: E0121 14:31:53.324632 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:53 crc kubenswrapper[4834]: E0121 14:31:53.324761 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.380900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.380983 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.380995 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.381012 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.381024 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.483978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.484026 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.484035 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.484050 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.484061 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.586424 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.586537 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.586564 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.586595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.586702 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.689879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.689919 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.689944 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.689962 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.689974 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.792312 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.792359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.792371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.792390 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.792404 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.894861 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.895302 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.895409 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.895513 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.895616 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.998497 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.998571 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.998876 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.998919 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:53 crc kubenswrapper[4834]: I0121 14:31:53.998976 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:53Z","lastTransitionTime":"2026-01-21T14:31:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.102392 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.102452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.102467 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.102485 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.102500 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.204739 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.204784 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.204798 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.204813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.204823 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.307848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.308099 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.308143 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.308175 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.308194 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.314974 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:47:35.257226735 +0000 UTC Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.338566 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.363876 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.387269 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.405155 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.411669 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.411740 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.411767 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.411797 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.411819 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.419658 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.441554 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.463489 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae9150
67b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.477335 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.491371 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.506370 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.518366 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.518437 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.518456 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.518486 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.518514 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.522518 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.536090 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.548713 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.563627 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.579732 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.592096 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.608365 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.621172 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:54Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.621994 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.622043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.622061 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.622088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.622110 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.724817 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.724893 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.724911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.724962 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.724984 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.827177 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.827236 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.827254 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.827280 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.827299 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.930711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.930780 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.930804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.930830 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:54 crc kubenswrapper[4834]: I0121 14:31:54.930848 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:54Z","lastTransitionTime":"2026-01-21T14:31:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.033808 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.033874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.033890 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.033917 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.033960 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.136626 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.136674 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.136686 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.136705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.136714 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.239959 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.240027 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.240040 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.240056 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.240068 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.315822 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:13:27.446368702 +0000 UTC Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.324276 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.324319 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.324293 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.324431 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:55 crc kubenswrapper[4834]: E0121 14:31:55.324702 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:55 crc kubenswrapper[4834]: E0121 14:31:55.324821 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:55 crc kubenswrapper[4834]: E0121 14:31:55.324946 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:55 crc kubenswrapper[4834]: E0121 14:31:55.325108 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.343155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.343232 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.343257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.343287 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.343315 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.446130 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.446162 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.446171 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.446188 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.446199 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.549624 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.550040 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.550138 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.550231 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.550302 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.653200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.653240 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.653338 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.653364 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.653375 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.756718 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.756793 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.756808 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.756845 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.756858 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.860463 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.860521 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.860537 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.860560 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.860575 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.963572 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.963649 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.963667 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.963692 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:55 crc kubenswrapper[4834]: I0121 14:31:55.963713 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:55Z","lastTransitionTime":"2026-01-21T14:31:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.067744 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.067851 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.067871 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.067903 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.067922 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.171132 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.171168 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.171177 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.171192 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.171203 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.278590 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.278637 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.278648 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.278666 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.278676 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.316992 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:08:11.161125492 +0000 UTC Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.381655 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.381705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.381716 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.381734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.381745 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.484770 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.484829 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.484850 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.484875 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.484894 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.587386 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.587415 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.587423 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.587453 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.587464 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.690048 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.690096 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.690138 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.690163 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.690179 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.794772 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.794842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.794855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.794879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.794895 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.898342 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.898399 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.898409 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.898429 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:56 crc kubenswrapper[4834]: I0121 14:31:56.898441 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:56Z","lastTransitionTime":"2026-01-21T14:31:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.001055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.001128 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.001161 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.001180 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.001191 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.140792 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.140850 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.140860 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.140882 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.140895 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.244128 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.244180 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.244193 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.244217 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.244228 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.317208 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:21:35.352084696 +0000 UTC Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.325034 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.325110 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:57 crc kubenswrapper[4834]: E0121 14:31:57.325195 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.325111 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.325645 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:57 crc kubenswrapper[4834]: E0121 14:31:57.325728 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:57 crc kubenswrapper[4834]: E0121 14:31:57.325856 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:57 crc kubenswrapper[4834]: E0121 14:31:57.325986 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.326503 4834 scope.go:117] "RemoveContainer" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" Jan 21 14:31:57 crc kubenswrapper[4834]: E0121 14:31:57.326960 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.347369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.347404 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.347417 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.347435 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.347446 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.450413 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.450444 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.450452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.450468 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.450476 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.553446 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.553488 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.553498 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.553514 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.553524 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.656389 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.656439 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.656450 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.656465 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.656474 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.761093 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.761176 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.761256 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.761313 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.761335 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.864819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.864974 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.864995 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.865044 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.865063 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.968470 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.968528 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.968550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.968574 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:57 crc kubenswrapper[4834]: I0121 14:31:57.968585 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:57Z","lastTransitionTime":"2026-01-21T14:31:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.070783 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.070847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.070859 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.070875 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.070886 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.174069 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.174105 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.174118 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.174135 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.174148 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.277126 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.277172 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.277182 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.277208 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.277219 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.317819 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:27:11.461059395 +0000 UTC Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.380202 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.380270 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.380285 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.380299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.380308 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.397612 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.397640 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.397649 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.397663 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.397673 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.411612 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.422475 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.422523 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.422535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.422552 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.422566 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.440201 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.445353 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.445397 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.445411 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.445431 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.445446 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.459269 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.463355 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.463396 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.463408 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.463424 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.463435 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.474678 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.478383 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.478426 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.478437 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.478456 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.478468 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.494364 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:31:58Z is after 2025-08-24T17:21:41Z" Jan 21 14:31:58 crc kubenswrapper[4834]: E0121 14:31:58.494486 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.496181 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.496207 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.496217 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.496232 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.496242 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.598579 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.598628 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.598638 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.598657 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.598668 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.702047 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.702104 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.702124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.702149 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.702162 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.806172 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.806601 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.806679 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.806767 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.806841 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.914103 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.914147 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.914158 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.914174 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:58 crc kubenswrapper[4834]: I0121 14:31:58.914188 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:58Z","lastTransitionTime":"2026-01-21T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.017704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.018181 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.018268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.018368 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.018448 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:59Z","lastTransitionTime":"2026-01-21T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.121613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.121659 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.121668 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.121685 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.121698 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:59Z","lastTransitionTime":"2026-01-21T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.224061 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.224103 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.224116 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.224138 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.224150 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:59Z","lastTransitionTime":"2026-01-21T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.318742 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:52:02.960699961 +0000 UTC Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.324553 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.324686 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.324824 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:31:59 crc kubenswrapper[4834]: E0121 14:31:59.324824 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.324867 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:31:59 crc kubenswrapper[4834]: E0121 14:31:59.325113 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:31:59 crc kubenswrapper[4834]: E0121 14:31:59.326093 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:31:59 crc kubenswrapper[4834]: E0121 14:31:59.326365 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.326478 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.326520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.326531 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.326545 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:31:59 crc kubenswrapper[4834]: I0121 14:31:59.326555 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:31:59Z","lastTransitionTime":"2026-01-21T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.319781 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:33:18.980026592 +0000 UTC Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.356473 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.356519 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.356529 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.356543 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.356554 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:00Z","lastTransitionTime":"2026-01-21T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.459357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.459409 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.459420 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.459438 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:00 crc kubenswrapper[4834]: I0121 14:32:00.459450 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:00Z","lastTransitionTime":"2026-01-21T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.178454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.178541 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.178557 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.178579 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.178597 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.281397 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.281427 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.281438 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.281452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.281460 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.320152 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:42:20.313803016 +0000 UTC Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.324484 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.324517 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.324487 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.324607 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.324487 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.324698 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.324770 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.324839 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.383497 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.383542 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.383552 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.383568 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.383578 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.486285 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.486321 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.486332 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.486351 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.486363 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.511861 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.512071 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:32:01 crc kubenswrapper[4834]: E0121 14:32:01.512157 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:32:33.51213205 +0000 UTC m=+99.486481115 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.589004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.589050 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.589059 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.589075 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.589087 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.691717 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.691805 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.691819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.691844 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:01 crc kubenswrapper[4834]: I0121 14:32:01.691859 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:01Z","lastTransitionTime":"2026-01-21T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.320579 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:03:07.414622781 +0000 UTC Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.412148 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.412223 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.412242 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.412288 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.412309 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:02Z","lastTransitionTime":"2026-01-21T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.514804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.514861 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.514874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.514892 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.514905 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:02Z","lastTransitionTime":"2026-01-21T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.790794 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/0.log" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.790868 4834 generic.go:334] "Generic (PLEG): container finished" podID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" containerID="5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788" exitCode=1 Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.790909 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerDied","Data":"5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.791397 4834 scope.go:117] "RemoveContainer" containerID="5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.802390 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.818606 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.823451 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.823510 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.823520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.823534 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.823542 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:02Z","lastTransitionTime":"2026-01-21T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
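Each status_manager.go:875 entry above and below embeds the strategic-merge patch the kubelet tried to apply, rendered as a Go %q-quoted string inside the err="..." field; that is where the stacked \" escaping comes from. All of them fail for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21T14:32:02Z. To read the payloads, unescape twice and pretty-print; a rough sketch assuming exactly the quoting visible in these lines (unicode_escape is a blunt instrument, but these payloads are ASCII):

    import codecs
    import json
    import re

    # Rough sketch: recover the pod status patch embedded in a
    # status_manager.go:875 "Failed to update status for pod" entry.
    def decode_status_patch(journal_line: str) -> dict:
        body = re.search(r'err="(.*)"', journal_line, re.DOTALL).group(1)
        once = codecs.decode(body, "unicode_escape")      # err= value -> message text
        patch = re.search(r'"(\{.*\})"', once, re.DOTALL).group(1)
        return json.loads(codecs.decode(patch, "unicode_escape"))

    sample = (r'... err="failed to patch status \"{\\\"metadata\\\":'
              r'{\\\"uid\\\":\\\"d31034df\\\"}}\" for pod ..."')
    print(json.dumps(decode_status_patch(sample), indent=2))

Decoded, the patches are ordinary Pod status updates (conditions, containerStatuses, hostIP, podIP); nothing in the payloads themselves is malformed. The rejection happens at the webhook TLS handshake, before the API server ever evaluates the patch.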
Has your network provider started?"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.839964 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.852774 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.865840 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.876020 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.887834 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
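The kube-multus exit recorded above (exitCode 1 at 14:32:02, after starting at 14:31:17) is a consequence rather than a cause: multus polls for the default network's readiness indicator file, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and gives up after roughly 45 seconds. OVN-Kubernetes never writes that file here because ovnkube-controller is itself crash-looping on the expired node.network-node-identity.openshift.io webhook certificate (see the ovnkube-node-6qwpj entry below). The readiness check is just a poll-for-file-with-deadline; a minimal sketch of that shape, with the path taken from this log but the helper and intervals purely illustrative, not multus source:

    import os
    import time

    # Minimal sketch of a "readiness indicator file" wait of the kind
    # kube-multus logs above: poll until a CNI config file appears or a
    # deadline passes. Timeout mirrors the ~45 s gap seen in this log.
    def wait_for_indicator(path: str, timeout_s: float = 45.0,
                           interval_s: float = 1.0) -> bool:
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(interval_s)
        return False  # maps to "timed out waiting for the condition"

    if not wait_for_indicator("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"):
        raise SystemExit("readiness indicator file never appeared; "
                         "is the default network (OVN-Kubernetes) up?")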
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.905900 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.920143 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.926749 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.926801 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.926813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.926833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.926847 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:02Z","lastTransitionTime":"2026-01-21T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.933272 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.943710 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.956494 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.968301 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.978389 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.988432 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:02 crc kubenswrapper[4834]: I0121 14:32:02.997689 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:02Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.011124 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.022526 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.029132 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.029183 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.029192 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.029209 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.029220 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.131272 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.131310 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.131318 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.131333 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.131342 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.233540 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.233576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.233586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.233599 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.233609 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.320979 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:25:43.707990672 +0000 UTC Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.324327 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:03 crc kubenswrapper[4834]: E0121 14:32:03.324520 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.324642 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:03 crc kubenswrapper[4834]: E0121 14:32:03.324799 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.324684 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.324675 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:03 crc kubenswrapper[4834]: E0121 14:32:03.325019 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:03 crc kubenswrapper[4834]: E0121 14:32:03.325149 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.336047 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.336085 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.336095 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.336114 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.336125 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.438512 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.438771 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.438875 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.438971 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.439052 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.542043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.543337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.543426 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.543523 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.543607 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.646711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.646752 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.646762 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.646778 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.646788 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.749988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.750032 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.750042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.750058 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.750071 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.795522 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/0.log" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.795583 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerStarted","Data":"17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.807137 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.816497 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.833578 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.846587 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.852898 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.852948 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.852959 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.852975 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.852984 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.859995 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.872077 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.884524 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.903880 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.917214 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.928470 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.944579 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.956978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.957042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.957056 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.957073 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.957088 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:03Z","lastTransitionTime":"2026-01-21T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.958612 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.972788 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.984312 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:03 crc kubenswrapper[4834]: I0121 14:32:03.996997 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:03Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.007196 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.020989 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.032164 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.060479 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.060539 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.060554 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.060575 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.060608 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.163570 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.163612 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.163624 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.163642 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.163654 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.266663 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.266703 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.266714 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.266730 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.266740 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.321364 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:16:00.356753421 +0000 UTC Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.338692 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.350807 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.369527 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.369559 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.369570 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.369586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.369598 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.370431 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.381564 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.392001 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.410130 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.423559 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.438898 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.450889 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.467087 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.472753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.472782 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.472790 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.472804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.472812 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.481147 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.494599 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.503268 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.513353 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.525116 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.536774 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.548795 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.559428 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:04Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.574699 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.574729 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.574739 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.574752 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.574762 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.677687 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.677724 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.677733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.677750 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.677759 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.780960 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.780998 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.781006 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.781023 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.781032 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.883650 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.883689 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.883699 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.883713 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.883723 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.986181 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.986303 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.986318 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.986332 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:04 crc kubenswrapper[4834]: I0121 14:32:04.986342 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:04Z","lastTransitionTime":"2026-01-21T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.088389 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.088430 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.088442 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.088456 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.088466 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.191190 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.191225 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.191234 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.191246 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.191255 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.293840 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.293879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.293888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.293902 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.293912 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.322423 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:21:28.67662952 +0000 UTC Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.323712 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.323740 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.323733 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:05 crc kubenswrapper[4834]: E0121 14:32:05.323825 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:05 crc kubenswrapper[4834]: E0121 14:32:05.324011 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.324134 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:05 crc kubenswrapper[4834]: E0121 14:32:05.324131 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:05 crc kubenswrapper[4834]: E0121 14:32:05.324223 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.395842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.395881 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.395893 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.395911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.395940 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.497777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.497813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.497823 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.497836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.497847 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.600264 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.600302 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.600311 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.600327 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.600337 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.702767 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.702890 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.702916 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.702954 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.702975 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.804389 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.804422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.804430 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.804444 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.804454 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.906621 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.906659 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.906669 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.906683 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:05 crc kubenswrapper[4834]: I0121 14:32:05.906693 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:05Z","lastTransitionTime":"2026-01-21T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.009738 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.009781 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.009792 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.009808 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.009819 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.111990 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.112039 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.112055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.112074 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.112088 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.214428 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.214471 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.214481 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.214494 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.214504 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.316362 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.316408 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.316416 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.316431 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.316441 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.323653 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:17:44.904693106 +0000 UTC Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.418338 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.418375 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.418384 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.418399 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.418410 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.520671 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.520711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.520720 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.520734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.520746 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.622937 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.622980 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.622993 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.623012 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.623021 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.724943 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.724976 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.724987 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.725005 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.725017 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.827132 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.827166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.827173 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.827185 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.827195 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.929301 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.929345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.929355 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.929373 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:06 crc kubenswrapper[4834]: I0121 14:32:06.929384 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:06Z","lastTransitionTime":"2026-01-21T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.031454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.031490 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.031500 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.031513 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.031523 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.133470 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.133525 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.133535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.133547 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.133557 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.235550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.235583 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.235591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.235603 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.235611 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.324422 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:01:31.622966532 +0000 UTC Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.324617 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.324641 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.324613 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.324617 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:07 crc kubenswrapper[4834]: E0121 14:32:07.325025 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:07 crc kubenswrapper[4834]: E0121 14:32:07.324727 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:07 crc kubenswrapper[4834]: E0121 14:32:07.325188 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:07 crc kubenswrapper[4834]: E0121 14:32:07.325217 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.337230 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.337293 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.337306 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.337321 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.337330 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.440069 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.440112 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.440124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.440140 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.440153 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.542240 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.542299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.542309 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.542324 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.542335 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.645539 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.645575 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.645584 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.645599 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.645608 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.748175 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.748241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.748255 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.748295 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.748305 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.851026 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.851073 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.851085 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.851101 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.851112 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.953435 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.953500 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.953513 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.953532 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:07 crc kubenswrapper[4834]: I0121 14:32:07.953545 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:07Z","lastTransitionTime":"2026-01-21T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.056742 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.056788 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.056800 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.056818 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.056829 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.159304 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.159371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.159387 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.159407 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.159418 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.262096 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.262145 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.262154 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.262168 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.262177 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.324974 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:13:03.49012754 +0000 UTC Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.365093 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.365134 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.365146 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.365166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.365179 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.467426 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.467468 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.467477 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.467491 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.467500 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.527903 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.527960 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.527968 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.527981 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.527991 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.538104 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.540988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.541018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.541027 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.541040 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.541050 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.551317 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.554535 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.554586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
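[annotation] The patch itself never reaches the API server's admission chain: every attempt dies in the TLS handshake with the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. A minimal Go sketch of the validity-window check the TLS stack is applying here (the PEM path is a hypothetical stand-in for whatever certificate the webhook actually serves):

```go
// A minimal sketch of the [NotBefore, NotAfter] check that fails above.
// The PEM path is hypothetical; point it at the webhook's serving cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/tmp/webhook-serving.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// Same rule the TLS stack applies: valid only inside [NotBefore, NotAfter].
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: now=%s, window=[%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.Format(time.RFC3339))
}
```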
event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.554602 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.554624 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.554644 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.569638 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.572801 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.572853 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
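[annotation] For reference, the "conditions" block inside the rejected patch is ordinary JSON. A sketch decoding it with the field names exactly as they appear in the log, using plain encoding/json rather than the k8s.io/api types the kubelet itself uses:

```go
// Decodes a trimmed copy of the Ready condition from the payload above.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
}

func main() {
	// Trimmed from the log: Ready is False because the CNI config is missing.
	payload := `{"conditions":[{"lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?","reason":"KubeletNotReady","status":"False","type":"Ready"}]}`

	var status struct {
		Conditions []nodeCondition `json:"conditions"`
	}
	if err := json.Unmarshal([]byte(payload), &status); err != nil {
		panic(err)
	}
	for _, c := range status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}
```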
event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.572864 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.572878 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.572886 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.583518 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.587295 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.587323 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
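[annotation] Independently of the webhook failure, the Ready condition stays False because the kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/. A diagnostic sketch of that directory scan; the accepted extensions follow the CNI library's convention, and this is a stand-in for inspection, not the kubelet's actual code path:

```go
// Lists CNI network configs the way a diagnostic would: no matching file
// in the conf dir means the runtime network stays NotReady.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; node stays NotReady")
	}
}
```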
event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.587332 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.587361 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.587371 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.601483 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:08Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:08 crc kubenswrapper[4834]: E0121 14:32:08.601608 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.603580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
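[annotation] The "exceeds retry count" record marks the end of one bounded update cycle: the kubelet retries the status PATCH a fixed number of times per sync and then gives up until the next period. The constant in the kubelet source is nodeStatusUpdateRetry (5), which matches the five "will retry" records above. A stripped-down sketch of that control flow:

```go
// Emulates the bounded retry loop visible in the log; the failing PATCH
// is stubbed out with the webhook error that actually occurs.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // mirrors the kubelet constant

// tryUpdateNodeStatus stands in for the PATCH that fails in the log:
// the admission webhook sits behind an expired serving certificate.
func tryUpdateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryUpdateNodeStatus()
		if err == nil {
			return nil
		}
		fmt.Printf("Error updating node status, will retry (attempt %d): %v\n", i+1, err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```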
event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.603609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.603619 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.603633 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.603641 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.706236 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.706279 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.706288 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.706303 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.706313 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.808907 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.808967 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.809004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.809019 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.809028 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.912094 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.912179 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.912191 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.912213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:08 crc kubenswrapper[4834]: I0121 14:32:08.912226 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:08Z","lastTransitionTime":"2026-01-21T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.014186 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.014253 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.014271 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.014305 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.014328 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.116496 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.116546 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.116558 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.116574 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.116585 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.219278 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.219343 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.219366 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.219394 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.219415 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.322621 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.322653 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.322662 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.322694 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.322703 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.323786 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.323817 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:09 crc kubenswrapper[4834]: E0121 14:32:09.323872 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.323900 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.323921 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:09 crc kubenswrapper[4834]: E0121 14:32:09.324039 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:09 crc kubenswrapper[4834]: E0121 14:32:09.324176 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:09 crc kubenswrapper[4834]: E0121 14:32:09.324366 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.325107 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:22:23.972435756 +0000 UTC Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.425736 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.425797 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.425836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.425870 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.425890 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.529369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.529448 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.529472 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.529502 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.529526 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.632160 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.632219 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.632230 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.632249 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.632263 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.734643 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.734711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.734734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.734753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.734767 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.837199 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.837257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.837274 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.837299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.837317 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.940379 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.940457 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.940478 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.940509 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:09 crc kubenswrapper[4834]: I0121 14:32:09.940530 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:09Z","lastTransitionTime":"2026-01-21T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.044061 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.044125 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.044147 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.044180 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.044202 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.146792 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.146832 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.146843 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.146859 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.146871 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.250000 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.250041 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.250053 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.250068 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.250078 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.326251 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 10:10:26.674142691 +0000 UTC Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.353603 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.353646 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.353654 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.353668 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.353677 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.458193 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.458254 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.458274 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.458302 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.458323 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.561610 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.561651 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.561660 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.561675 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.561687 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.664711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.664782 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.664807 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.664836 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.664857 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.767819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.767899 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.767977 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.768019 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.768155 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.871552 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.871615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.871631 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.871656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.871712 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.975081 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.975153 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.975178 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.975207 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:10 crc kubenswrapper[4834]: I0121 14:32:10.975231 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:10Z","lastTransitionTime":"2026-01-21T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.077678 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.077718 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.077736 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.077751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.077762 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.180116 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.180160 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.180170 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.180187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.180198 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.282258 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.282308 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.282321 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.282338 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.282351 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.324117 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:11 crc kubenswrapper[4834]: E0121 14:32:11.324234 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.324263 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.324294 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.324343 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:11 crc kubenswrapper[4834]: E0121 14:32:11.324363 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:11 crc kubenswrapper[4834]: E0121 14:32:11.324430 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:11 crc kubenswrapper[4834]: E0121 14:32:11.324506 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.327192 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:52:04.917706217 +0000 UTC Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.385133 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.385165 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.385173 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.385187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.385198 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.488527 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.488574 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.488582 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.488597 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.488611 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.590822 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.590863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.590872 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.590887 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.590897 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.693538 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.693580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.693591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.693606 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.693618 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.796692 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.796734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.796743 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.796758 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.796768 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.899186 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.899234 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.899261 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.899280 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:11 crc kubenswrapper[4834]: I0121 14:32:11.899293 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:11Z","lastTransitionTime":"2026-01-21T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.001919 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.001980 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.001992 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.002011 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.002024 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.104852 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.104914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.104954 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.104978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.104993 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.207262 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.207301 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.207310 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.207323 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.207332 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.310008 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.310059 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.310067 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.310082 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.310091 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.325216 4834 scope.go:117] "RemoveContainer" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.327350 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:06:21.786197272 +0000 UTC Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.412640 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.412682 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.412694 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.412710 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.412722 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.515321 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.515359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.515369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.515385 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.515395 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.617530 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.617620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.617671 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.617700 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.617722 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.720129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.720486 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.720617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.720785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.720914 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.822586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.822619 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.822627 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.822640 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.822649 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.924820 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.924879 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.924888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.924901 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:12 crc kubenswrapper[4834]: I0121 14:32:12.924911 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:12Z","lastTransitionTime":"2026-01-21T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.026687 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.026723 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.026731 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.026746 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.026754 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.129293 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.129572 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.129677 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.129745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.129810 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.232077 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.232136 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.232148 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.232166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.232178 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.323513 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.323647 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:13 crc kubenswrapper[4834]: E0121 14:32:13.323790 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:13 crc kubenswrapper[4834]: E0121 14:32:13.323898 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.324090 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.324172 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:13 crc kubenswrapper[4834]: E0121 14:32:13.324349 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:13 crc kubenswrapper[4834]: E0121 14:32:13.324291 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.327718 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:50:59.385181341 +0000 UTC Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.334400 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.334435 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.334446 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.334459 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.334469 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.436157 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.436250 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.436261 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.436275 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.436286 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.538614 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.538647 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.538656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.538670 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.538679 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.641275 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.641309 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.641326 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.641342 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.641353 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.743641 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.743666 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.743676 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.743689 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.743698 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.827485 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/2.log" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.829698 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.845218 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.845243 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.845287 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.845299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.845307 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.947506 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.947536 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.947545 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.947558 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:13 crc kubenswrapper[4834]: I0121 14:32:13.947566 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:13Z","lastTransitionTime":"2026-01-21T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.050355 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.050397 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.050409 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.050425 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.050436 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.154146 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.154193 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.154205 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.154223 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.154238 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.256550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.256591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.256602 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.256617 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.256629 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.328028 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:01:46.138789008 +0000 UTC Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.336296 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.349056 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.358487 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.358717 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.358805 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.358881 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.358980 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.359678 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.383157 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.398024 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.419618 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed
81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.441472 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.456728 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.461149 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.461179 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.461188 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.461202 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.461210 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.475414 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.521433 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd
88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.539012 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.550738 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.562610 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.562801 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.562877 4834 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.563000 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.563071 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.563918 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.580707 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.599169 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.611420 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.626438 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.637731 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.665567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.665599 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.665609 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.665624 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.665634 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.767825 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.767876 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.767890 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.767912 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.767947 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.833155 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.849454 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"where
abouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.862760 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.870549 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.870589 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.870598 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.870611 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.870621 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.878297 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.895319 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not 
y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.907379 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.920676 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.939009 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.952616 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.967075 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.972193 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.972270 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.972279 4834 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.972294 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.972308 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:14Z","lastTransitionTime":"2026-01-21T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.979601 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:14 crc kubenswrapper[4834]: I0121 14:32:14.991100 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:14Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.003276 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.012462 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.023501 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.035001 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.044859 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.054911 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.064738 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.074371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.074412 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.074424 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.074441 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.074454 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.176777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.176827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.176837 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.176855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.176866 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.279367 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.279410 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.279420 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.279435 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.279445 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.324000 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.324062 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.324005 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:15 crc kubenswrapper[4834]: E0121 14:32:15.324150 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:15 crc kubenswrapper[4834]: E0121 14:32:15.324195 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:15 crc kubenswrapper[4834]: E0121 14:32:15.324252 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.324617 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:15 crc kubenswrapper[4834]: E0121 14:32:15.324784 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.328463 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:33:15.589469668 +0000 UTC Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.383058 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.383124 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.383146 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.383200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.383228 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.485466 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.485505 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.485521 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.485541 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.485554 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.587452 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.587665 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.587745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.587872 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.587971 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.690412 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.690444 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.690453 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.690466 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.690489 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.793098 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.793159 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.793171 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.793187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.793199 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.837011 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/3.log" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.837860 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/2.log" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.840129 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" exitCode=1 Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.840229 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.840308 4834 scope.go:117] "RemoveContainer" containerID="50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.840983 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:32:15 crc kubenswrapper[4834]: E0121 14:32:15.841173 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.860546 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c
7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.873373 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f69
3c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.883757 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webh
ook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.894865 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.895317 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.895515 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.895526 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.895541 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.895551 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.906771 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.925490 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50078c36b15fa40410f261fcc7e1c7be00ae915067b4ae107fbd51fa5df69d18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:31:40Z\\\",\\\"message\\\":\\\"nshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823405 6443 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823410 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-86g84\\\\nI0121 14:31:39.823415 6443 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0121 14:31:39.823420 6443 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-86g84 in node crc\\\\nF0121 14:31:39.823422 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not y\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:15Z\\\",\\\"message\\\":\\\"oMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:32:14.951351 6843 services_controller.go:451] Built service openshift-multus/multus-admission-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:32:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\
"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.935711 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.947508 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.960339 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.971436 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.981807 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.992964 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:15Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.997643 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.997678 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.997686 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.997700 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:15 crc kubenswrapper[4834]: I0121 14:32:15.997708 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:15Z","lastTransitionTime":"2026-01-21T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.003329 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.014079 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.023114 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.037026 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.047045 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.059862 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.099638 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.099675 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.099685 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.099700 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.099711 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.202614 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.202664 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.202672 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.202686 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.202696 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.305286 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.305357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.305369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.305390 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.305401 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.329074 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:20:55.46000682 +0000 UTC Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.335820 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.407867 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.407897 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.407905 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.407917 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.407949 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.510417 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.510478 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.510488 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.510504 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.510517 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.612413 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.612691 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.612705 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.612718 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.612726 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.715613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.715675 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.715691 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.715707 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.715719 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.817777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.817824 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.817832 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.817845 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.817854 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.844374 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/3.log" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.847904 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:32:16 crc kubenswrapper[4834]: E0121 14:32:16.848116 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.862814 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.873955 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 
14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.888292 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.901128 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.912313 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.921158 4834 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.921200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.921210 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.921224 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.921233 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:16Z","lastTransitionTime":"2026-01-21T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.923197 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.939467 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:15Z\\\",\\\"message\\\":\\\"oMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:32:14.951351 6843 services_controller.go:451] Built service openshift-multus/multus-admission-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:32:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.949444 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.961691 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.980128 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.990195 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:16 crc kubenswrapper[4834]: I0121 14:32:16.998837 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:16Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.009501 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.017625 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d37ad24-4387-4d85-bb15-50abc42dd27b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79d6d1f0b1be80b358d624746e6afaf9b8d13e4b7e75268f72ab35ae062967a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.023962 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.024003 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.024015 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.024030 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.024042 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.028597 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.038445 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.046867 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.057325 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.068301 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:17Z is after 2025-08-24T17:21:41Z"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.126782 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.126821 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.126829 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.126841 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.126850 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.229178 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.229210 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.229218 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.229232 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.229240 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.323493 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.323535 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.323574 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.323535 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.323701 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.323765 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.323873 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.324031 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.329759 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:59:04.147145977 +0000 UTC
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.331215 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.331245 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.331257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.331272 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.331286 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.433673 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.433717 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.433734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.433751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.433761 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.536779 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.536819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.536830 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.536847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.536858 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.639525 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.639563 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.639576 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.639593 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.639605 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.741425 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.741461 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.741472 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.741487 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.741498 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.844068 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.844119 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.844129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.844149 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.844162 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.871211 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.871323 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.871418 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:21.8713741 +0000 UTC m=+147.845723145 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.871448 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.871473 4834 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.871521 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:21.871510734 +0000 UTC m=+147.845859779 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.871537 4834 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.871560 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:21.871554046 +0000 UTC m=+147.845903091 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.946726 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.946762 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.946771 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.946785 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.946799 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:17Z","lastTransitionTime":"2026-01-21T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.972391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:32:17 crc kubenswrapper[4834]: I0121 14:32:17.972459 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972581 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972602 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972614 4834 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972659 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:21.972644098 +0000 UTC m=+147.946993143 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972581 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972690 4834 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972702 4834 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:32:17 crc kubenswrapper[4834]: E0121 14:32:17.972742 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:21.97272827 +0000 UTC m=+147.947077315 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.049257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.049293 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.049304 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.049318 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.049328 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.151872 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.151952 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.151964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.151982 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.151994 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.254438 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.254491 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.254500 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.254520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.254529 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.330647 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:40:23.18925807 +0000 UTC
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.356353 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.356392 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.356402 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.356415 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.356423 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.458688 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.458726 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.458736 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.458751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.458763 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.561537 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.561583 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.561596 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.561614 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.561634 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.654716 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.654764 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.654784 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.654804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.654814 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.665676 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:18Z is after 2025-08-24T17:21:41Z"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.668847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.668889 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.668900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.668915 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.668943 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.680204 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:18Z is after 2025-08-24T17:21:41Z"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.683400 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.683440 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.683456 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.683477 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.683491 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.695134 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.698529 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.698567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.698579 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.698595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.698605 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.711785 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.715658 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.715689 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.715698 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.715711 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.715720 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.726189 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:18Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:18 crc kubenswrapper[4834]: E0121 14:32:18.726349 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.727745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.727778 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.727790 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.727806 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.727818 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.829923 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.829988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.830005 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.830029 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.830039 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.932505 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.932543 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.932553 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.932567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:18 crc kubenswrapper[4834]: I0121 14:32:18.932577 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:18Z","lastTransitionTime":"2026-01-21T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.034774 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.034813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.034829 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.034843 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.034853 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.136863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.136901 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.136910 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.136924 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.136983 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.239277 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.239326 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.239343 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.239359 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.239375 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.324055 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.324082 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.324208 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.324240 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:19 crc kubenswrapper[4834]: E0121 14:32:19.324316 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:19 crc kubenswrapper[4834]: E0121 14:32:19.325011 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:19 crc kubenswrapper[4834]: E0121 14:32:19.325848 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:19 crc kubenswrapper[4834]: E0121 14:32:19.326163 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.330831 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:27:31.015296438 +0000 UTC Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.341940 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.341979 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.341990 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.342004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.342014 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.444607 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.444837 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.444952 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.445092 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.445187 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.548054 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.548428 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.548531 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.548670 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.548770 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.651529 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.651556 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.651566 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.651582 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.651594 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.753873 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.753914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.753943 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.753960 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.753971 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.855780 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.855818 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.855828 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.855843 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.855854 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.958303 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.958538 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.958546 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.958564 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:19 crc kubenswrapper[4834]: I0121 14:32:19.958573 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:19Z","lastTransitionTime":"2026-01-21T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.061520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.061602 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.061627 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.061656 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.061682 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.163747 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.163783 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.163795 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.163813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.163824 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.265673 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.265712 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.265721 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.265733 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.265741 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.331452 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:33:26.802485281 +0000 UTC Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.367985 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.368007 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.368016 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.368027 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.368035 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.469620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.470113 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.470134 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.470156 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.470174 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.572550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.572598 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.572613 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.572629 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.572642 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.675571 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.675603 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.675612 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.675626 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.675636 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.779544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.779604 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.779630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.779652 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.779666 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.881978 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.882054 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.882073 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.882099 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.882118 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.984591 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.984630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.984639 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.984654 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:20 crc kubenswrapper[4834]: I0121 14:32:20.984664 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:20Z","lastTransitionTime":"2026-01-21T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.090739 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.090773 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.090788 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.090811 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.090824 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.193901 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.193954 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.193965 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.193980 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.193993 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.296329 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.296369 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.296383 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.296397 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.296406 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.324062 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.324131 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.324192 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:21 crc kubenswrapper[4834]: E0121 14:32:21.324187 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.324214 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:21 crc kubenswrapper[4834]: E0121 14:32:21.324281 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:21 crc kubenswrapper[4834]: E0121 14:32:21.324356 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:21 crc kubenswrapper[4834]: E0121 14:32:21.324409 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.332157 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:06:42.045359251 +0000 UTC Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.398098 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.398137 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.398148 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.398163 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.398172 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.501042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.501127 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.501149 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.501183 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.501204 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.604046 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.604148 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.604163 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.604186 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.604197 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.707139 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.707184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.707197 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.707213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.707224 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.808983 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.809014 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.809023 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.809036 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.809045 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.911702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.911746 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.911756 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.911773 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:21 crc kubenswrapper[4834]: I0121 14:32:21.911784 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:21Z","lastTransitionTime":"2026-01-21T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.014586 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.014629 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.014642 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.014661 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.014685 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.116777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.116813 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.116821 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.116834 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.116843 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.219008 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.219078 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.219108 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.219141 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.219162 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.321182 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.321216 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.321247 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.321318 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.321331 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.332891 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:38:52.056836248 +0000 UTC Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.424187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.424241 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.424253 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.424272 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.424285 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.526810 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.526851 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.526860 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.526874 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.526884 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.629773 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.629815 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.629827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.629842 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.629869 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.735510 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.735578 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.735592 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.735616 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.735633 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.838363 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.838404 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.838413 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.838430 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.838439 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.941299 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.941630 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.941740 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.941827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:22 crc kubenswrapper[4834]: I0121 14:32:22.941948 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:22Z","lastTransitionTime":"2026-01-21T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.044856 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.044894 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.044906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.044922 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.044960 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.147292 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.147339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.147349 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.147364 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.147378 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.249962 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.250002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.250011 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.250025 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.250036 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.324519 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:23 crc kubenswrapper[4834]: E0121 14:32:23.324864 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.324647 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:23 crc kubenswrapper[4834]: E0121 14:32:23.325097 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.324612 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.324655 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:23 crc kubenswrapper[4834]: E0121 14:32:23.325347 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:23 crc kubenswrapper[4834]: E0121 14:32:23.325434 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.333764 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:02:55.713201195 +0000 UTC Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.352696 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.352944 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.353072 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.353167 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.353246 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.456544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.456595 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.456606 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.456620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.456629 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.559224 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.559266 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.559275 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.559289 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.559299 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.661650 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.661693 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.661703 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.661718 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.661729 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.764702 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.764747 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.764758 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.764777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.764788 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.866965 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.867005 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.867016 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.867031 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.867042 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.969911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.969991 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.970004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.970022 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:23 crc kubenswrapper[4834]: I0121 14:32:23.970036 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:23Z","lastTransitionTime":"2026-01-21T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.072900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.072973 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.072987 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.073002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.073011 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.180015 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.180064 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.180076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.180094 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.180106 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.282848 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.282882 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.282891 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.282906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.282916 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.334403 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:13:20.306159554 +0000 UTC Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.345302 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.359584 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.381400 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPat
h\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:15Z\\\",\\\"message\\\":\\\"oMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:32:14.951351 6843 services_controller.go:451] Built service openshift-multus/multus-admission-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:32:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.385150 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.385184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.385198 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.385215 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.385225 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.391471 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.403775 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.422089 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.434795 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.446693 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.457062 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.470525 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.481368 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.488144 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.488190 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.488202 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.488221 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.488233 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.492278 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d37ad24-4387-4d85-bb15-50abc42dd27b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79d6d1f0b1be80b358d624746e6afaf9b8d13e4b7e75268f72ab35ae062967a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.505135 4834 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.516099 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.529655 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.543507 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.554781 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.569469 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.578578 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:24Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.589806 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.589833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.589841 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.589854 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.589865 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.692306 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.692345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.692355 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.692371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.692382 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.794863 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.794901 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.794913 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.794949 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.794962 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.897288 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.897330 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.897349 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.897366 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:24 crc kubenswrapper[4834]: I0121 14:32:24.897376 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:24Z","lastTransitionTime":"2026-01-21T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.000304 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.000335 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.000344 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.000357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.000366 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.102919 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.102988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.103000 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.103019 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.103031 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.205310 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.205343 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.205357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.205371 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.205380 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.308650 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.308713 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.308738 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.308761 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.308777 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.324503 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.324527 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.324598 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.324598 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:25 crc kubenswrapper[4834]: E0121 14:32:25.324716 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:25 crc kubenswrapper[4834]: E0121 14:32:25.324814 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:25 crc kubenswrapper[4834]: E0121 14:32:25.324959 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:25 crc kubenswrapper[4834]: E0121 14:32:25.324993 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.334720 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:52:36.954009224 +0000 UTC Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.410759 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.410800 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.410810 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.410827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.410837 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.512849 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.512906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.512920 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.512988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.513005 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.616376 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.616425 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.616436 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.616456 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.616469 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.719636 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.719683 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.719692 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.719708 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.719719 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.821798 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.821832 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.821843 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.821860 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.821871 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.924385 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.924431 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.924442 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.924454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:25 crc kubenswrapper[4834]: I0121 14:32:25.924463 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:25Z","lastTransitionTime":"2026-01-21T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.026518 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.026553 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.026564 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.026580 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.026590 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:26Z","lastTransitionTime":"2026-01-21T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.129439 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.129497 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.129505 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.129520 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.129529 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:26Z","lastTransitionTime":"2026-01-21T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.231335 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.231390 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.231402 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.231415 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.231425 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:26Z","lastTransitionTime":"2026-01-21T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.333296 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.333345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.333356 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.333372 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.333382 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:26Z","lastTransitionTime":"2026-01-21T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.335441 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:12:52.948609758 +0000 UTC
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.435674 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.435727 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.435741 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.435788 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:26 crc kubenswrapper[4834]: I0121 14:32:26.435804 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:26Z","lastTransitionTime":"2026-01-21T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry block (four "Recording event message for node" events plus the "Node became not ready" condition) repeats every ~100 ms from 14:32:26.538 through 14:32:27.256 with only the timestamps advancing; the duplicates are elided ...]
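The NotReady spam above has a single cause: the container runtime reports NetworkReady=false because nothing has written a CNI network config yet. As an illustrative aside (none of this code appears in the log), here is a minimal Python sketch of that readiness test, assuming the /etc/kubernetes/cni/net.d directory named in the messages and the .conf/.conflist/.json extensions that libcni conventionally scans:

```python
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")   # directory named in the log
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}   # extensions libcni scans (assumption)

def network_ready(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """Return True if at least one parseable CNI config file exists."""
    if not conf_dir.is_dir():
        return False
    for path in sorted(conf_dir.iterdir()):
        if path.suffix not in CNI_EXTENSIONS:
            continue
        try:
            json.loads(path.read_text())  # must at least be valid JSON
        except (OSError, ValueError):
            continue
        return True
    return False

if __name__ == "__main__":
    # With an empty net.d this prints False, matching the
    # "no CNI configuration file" condition in the log.
    print("NetworkReady:", network_ready())
```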
Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.324467 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.324519 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.324531 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:32:27 crc kubenswrapper[4834]: E0121 14:32:27.324592 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.324478 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:32:27 crc kubenswrapper[4834]: E0121 14:32:27.324713 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:32:27 crc kubenswrapper[4834]: E0121 14:32:27.324754 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:27 crc kubenswrapper[4834]: E0121 14:32:27.324799 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.336597 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:06:11.955995075 +0000 UTC Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.358237 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.358267 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.358275 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.358287 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.358298 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.460330 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.460384 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.460391 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.460405 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.460414 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.562706 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.562772 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.562787 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.562808 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.562824 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.666010 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.666049 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.666059 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.666076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.666088 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.768411 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.768467 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.768481 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.768498 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.768510 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.871116 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.871155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.871166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.871185 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.871196 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.972973 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.973011 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.973021 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.973035 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:27 crc kubenswrapper[4834]: I0121 14:32:27.973045 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:27Z","lastTransitionTime":"2026-01-21T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.075589 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.075643 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.075654 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.075671 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.075684 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.177726 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.177770 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.177780 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.177795 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.177808 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.279704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.279745 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.279753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.279768 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.279777 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
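Note that each certificate_manager.go line in this stretch reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline, and every computed deadline already lies in the past relative to the node clock, so rotation is due on every pass. A hedged sketch of the jittered-deadline rule, assuming client-go's behavior of picking a random point in roughly the 70–90% span of the certificate's lifetime (the fractions and the one-year lifetime below are assumptions, not values from this log):

```python
import random
from datetime import datetime, timedelta, timezone

def rotation_deadline(not_before: datetime, not_after: datetime,
                      rng: random.Random = random.Random()) -> datetime:
    """Pick a rotation deadline at a uniformly random point between
    70% and 90% of the certificate's lifetime (assumed upstream
    client-go behavior; the exact fractions are an assumption)."""
    lifetime = not_after - not_before
    jitter = 0.7 + 0.2 * rng.random()
    return not_before + timedelta(seconds=lifetime.total_seconds() * jitter)

# Expiry taken from the log; the issuance time is not logged, so a
# one-year lifetime is assumed purely for the demo.
not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
not_before = not_after - timedelta(days=365)
for _ in range(3):
    # Each call lands on a different deadline, which is why consecutive
    # certificate_manager.go lines report different rotation deadlines.
    print(rotation_deadline(not_before, not_after))
```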
Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.324987 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"
Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.325166 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090"
Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.337053 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:39:34.496641734 +0000 UTC
[... the five-entry node-status block repeats every ~100 ms from 14:32:28.382 through 14:32:28.924; the duplicates are elided ...]
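The "back-off 40s" figure in the CrashLoopBackOff message above comes from the kubelet's exponential restart back-off. A small sketch, assuming the upstream defaults of a 10s initial delay doubling up to a 5-minute cap (not stated in this log):

```python
from datetime import timedelta

# Kubelet crash-loop back-off: starts at 10s, doubles per restart,
# capped at 5 minutes (assumed upstream kubelet defaults).
BASE = timedelta(seconds=10)
CAP = timedelta(minutes=5)

def crashloop_delay(restart_count: int) -> timedelta:
    """Delay applied before the Nth restart of a failing container."""
    return min(BASE * (2 ** restart_count), CAP)

# "back-off 40s restarting failed container=ovnkube-controller"
# corresponds to the third step of this sequence:
print([str(crashloop_delay(n)) for n in range(6)])
# ['0:00:10', '0:00:20', '0:00:40', '0:01:20', '0:02:40', '0:05:00']
```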
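The status patches that follow are all rejected for the same reason: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-21, so every TLS handshake fails the x509 validity check. A self-contained sketch of that check, using the two timestamps from the error text:

```python
from datetime import datetime, timezone

def check_validity(now: datetime, not_after: datetime) -> None:
    """The x509 expiry comparison that the TLS handshakes below fail."""
    if now > not_after:
        raise ValueError(
            f"certificate has expired: current time {now:%Y-%m-%dT%H:%M:%SZ}"
            f" is after {not_after:%Y-%m-%dT%H:%M:%SZ}")

# Timestamps copied from the webhook error in the log.
now = datetime(2026, 1, 21, 14, 32, 28, tzinfo=timezone.utc)
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
try:
    check_validity(now, not_after)
except ValueError as err:
    print(err)  # matches the log: ...2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z
```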
Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.935384 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.938339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.938373 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.938388 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.938406 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.938424 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.948713 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.952055 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.952095 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.952104 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.952120 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.952130 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.962890 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.966573 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.966620 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.966633 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.966652 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.966666 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.978725 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.982230 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.982260 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.982268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.982281 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.982291 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.996429 4834 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3596a4c-1b27-4372-98f4-5a8df0ab061a\\\",\\\"systemUUID\\\":\\\"b40d490c-65ad-4102-a086-7d2250750f42\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:28Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:28 crc kubenswrapper[4834]: E0121 14:32:28.996555 4834 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.999258 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.999309 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.999325 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.999346 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:28 crc kubenswrapper[4834]: I0121 14:32:28.999361 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:28Z","lastTransitionTime":"2026-01-21T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.101966 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.102004 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.102012 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.102043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.102054 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.204490 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.204554 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.204567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.204584 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.204595 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.307171 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.307228 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.307244 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.307265 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.307280 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.323484 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:29 crc kubenswrapper[4834]: E0121 14:32:29.323597 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.323762 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:29 crc kubenswrapper[4834]: E0121 14:32:29.323806 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.323911 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:29 crc kubenswrapper[4834]: E0121 14:32:29.323988 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.324091 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:29 crc kubenswrapper[4834]: E0121 14:32:29.324160 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.337585 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:36:00.705747576 +0000 UTC Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.410234 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.410268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.410279 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.410295 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.410307 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.512688 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.512715 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.512723 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.512734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.512742 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.615509 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.615555 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.615571 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.615594 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.615608 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.717914 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.718002 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.718018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.718042 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.718059 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.823998 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.824032 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.824043 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.824056 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.824070 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.926847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.926880 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.926888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.926900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:29 crc kubenswrapper[4834]: I0121 14:32:29.926911 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:29Z","lastTransitionTime":"2026-01-21T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.029500 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.029547 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.029563 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.029585 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.029601 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.132877 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.132964 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.132988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.133016 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.133036 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.236185 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.236229 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.236245 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.236261 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.236273 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.337680 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:42:17.502847054 +0000 UTC Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.339345 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.339376 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.339408 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.339422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.339431 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.441707 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.441781 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.441802 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.441830 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.441851 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.544673 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.544736 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.544751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.544776 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.544788 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.647514 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.647563 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.647571 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.647584 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.647597 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.750214 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.750243 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.750274 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.750287 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.750297 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.852263 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.852316 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.852328 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.852387 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.852400 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.955101 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.955155 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.955172 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.955194 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:30 crc kubenswrapper[4834]: I0121 14:32:30.955211 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:30Z","lastTransitionTime":"2026-01-21T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.057748 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.057781 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.057798 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.057814 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.057832 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.160291 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.160338 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.160349 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.160365 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.160376 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.262808 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.262877 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.262900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.262921 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.262971 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.324161 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.324167 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.324176 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.324202 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:31 crc kubenswrapper[4834]: E0121 14:32:31.324352 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:31 crc kubenswrapper[4834]: E0121 14:32:31.325138 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:31 crc kubenswrapper[4834]: E0121 14:32:31.325228 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:31 crc kubenswrapper[4834]: E0121 14:32:31.325337 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.338370 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:43:43.118817787 +0000 UTC Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.366251 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.366302 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.366322 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.366361 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.366390 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.468782 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.468817 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.468829 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.468846 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.468856 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.571518 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.571551 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.571561 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.571575 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.571585 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.674357 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.674391 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.674401 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.674416 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.674427 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.777225 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.777261 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.777269 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.777288 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.777298 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.883091 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.883144 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.883166 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.883184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.883196 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.987028 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.987164 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.987471 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.988381 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:31 crc kubenswrapper[4834]: I0121 14:32:31.988404 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:31Z","lastTransitionTime":"2026-01-21T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.091393 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.091439 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.091451 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.091467 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.091478 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.193488 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.193552 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.193567 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.193582 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.193620 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.296280 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.296331 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.296339 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.296352 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.296360 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.339084 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:52:48.532501014 +0000 UTC Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.398828 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.398894 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.398904 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.398917 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.398940 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.501442 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.501483 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.501493 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.501507 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.501519 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.604825 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.604886 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.604900 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.604921 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.604967 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.707133 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.707178 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.707190 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.707204 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.707217 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.810069 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.810107 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.810115 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.810130 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.810140 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.913238 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.913277 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.913289 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.913317 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:32 crc kubenswrapper[4834]: I0121 14:32:32.913329 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:32Z","lastTransitionTime":"2026-01-21T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.015884 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.015984 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.016009 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.016038 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.016063 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.118692 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.118731 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.118740 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.118753 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.118762 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.220406 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.220479 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.220498 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.220523 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.220546 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323527 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323569 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323649 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.323680 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323754 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323805 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323847 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323865 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.323862 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323887 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.323901 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.323908 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.324080 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.340215 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:30:18.065575986 +0000 UTC Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.425783 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.425821 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.425832 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.425846 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.425868 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.525209 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.525381 4834 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:32:33 crc kubenswrapper[4834]: E0121 14:32:33.525465 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs podName:d31034df-9ceb-49b0-9ad5-334dcaa28fa4 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:37.525451558 +0000 UTC m=+163.499800603 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs") pod "network-metrics-daemon-dtqf2" (UID: "d31034df-9ceb-49b0-9ad5-334dcaa28fa4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.527890 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.527922 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.527948 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.527963 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.527973 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.629860 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.629896 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.629906 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.629921 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.629949 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.732256 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.732300 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.732311 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.732325 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.732334 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.835099 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.835133 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.835144 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.835163 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.835175 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.937556 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.937597 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.937608 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.937625 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:33 crc kubenswrapper[4834]: I0121 14:32:33.937636 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:33Z","lastTransitionTime":"2026-01-21T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.040911 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.041018 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.041037 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.041066 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.041088 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.143578 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.143666 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.143685 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.143713 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.143747 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.247297 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.247389 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.247416 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.247454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.247482 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.340433 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:22:04.333257753 +0000 UTC Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.340525 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.349768 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.349817 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.349831 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.349853 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.349870 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
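[Editor's annotation] The status patches start failing here because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-21. A minimal sketch reproducing that failure mode with crypto/x509's validity check; the PEM path is hypothetical:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	now := time.Now()
	if now.After(cert.NotAfter) {
		// Same shape as the kubelet error: current time ... is after <NotAfter>.
		fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}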
Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.357295 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a06b9a83235ac550b3920fe82b41ce2a9152f1e7c4fc41d6aed15db9fd51548b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.373699 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26b2a8d601114d5ad3764c4ff665d0542035bfabe4e93299061351ac544bf1c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.392068 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.408226 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjx4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b767bc6-35ad-425a-a3b7-09783d972cf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad3ea1938fcc54df7804151433cd4dc5b291d795f58871ff7c3e6df8c9c7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p8x6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjx4h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.424388 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-66jlt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8efbee9-2d1d-473f-ad38-b10d84821e23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33cce0a6c033d1be76308cdb9cc4c83972ed936729c6c6d96f786ad2de8f89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0cd0f2b1f969ecdf087548cd9fa9871fa822fa160c881d4ff4eb6743a8d6be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f332edc44366aa030e45417dbe28b408708792f8b7308f5bf0f1533723fcacb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://970d5f9eab1f7a3417f0fb0ead16d3183b7d8a3a28b4ee151da7ba00fe5851c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9cc07b1b9257b672b90c5c302df0d098e708a509d078c48e3fe2005ba31eb8f1\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-21T14:31:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://143ff030bf94bc7680edfc500fc901b29e30179c856c91ee2bab5f2393088e43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d34a4e18844d1145075d066ad40f6c15adbb3559d181051b6c82ff4d2e226f38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbtm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-66jlt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.435981 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4edb8fc0-f716-4a40-8028-fc796a8804bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccfeabfb8850af7e7ab5d639cc5cdbe5aab3098242eda2d5028f649faea8ebcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df5b8b59b60888e8b888c83ddef89c3b805140eb66233fe7f3f453fa1d9169a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn5pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-95s5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.447768 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a83decc6-0d6d-406e-9467-01627b8ba630\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84be8bca4b6fcebd57d858713eb83f5f7e3f279df94616819588820162131adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://203538b6d7bb3e906d079caa680b1cca86d1abd76bd903402382e5b2a78dc429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c0fde3a7010a1826136e215bee57fb119d1a9304dc80056924769a16708750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cda4f38ffd13bce006ac9d7421410ecdfb91bb85247fc6f94b0546e49decbb7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.451562 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.451600 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.451608 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.451621 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.451653 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.471065 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbdc2b29-e6b1-455f-876d-653873b489e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9faac2ae97b78df6ef4a9f70055c2d92c3a865a42b2d6d4870257d5d6145803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dd88d2ca0a08b03759d25fdc426fd07d8d805f8f98fe81854a254364dc6f454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb47b12ba8ddc165e892ae85ebb058ba433f7e72bad2250d6692411dedcc6636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771a484e665742f9b3f804c0ecee95e862ae93c7251565b6bab768c6eb17be7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d14d54bac3eb2a067ca8cf32c81c50e218c98a5b72547cdfc4a8f91618ce94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09efd683f819b6cbbe17ca2eb8f6afada644f001422c2562f11a8cb97ccc6c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0370a27f2c5b25ebd9bc33a1155a8eb39925e73e2cc8a90691e998af4abe86ce\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d5c12328124d72993d03a20ade9f2c0bb6d633da94773658fbb3acc091ff9e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.484518 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"449516f3-6736-49f1-b41e-cc3702440174\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.496042 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80992bb440a42e3fdee1db22017c315c236d3a76b64c126db921cab26718dfcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bec9c16b6c197c2ec5566842762ce4290c9ab74b65055f8806cb80f716f3a89a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.506125 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9d51eb-93f7-4c89-8c91-258f908c766d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08a6f7fdd406184879ee05698b1229610dbaeafc1b0620f639dd28df2fb8569d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5snc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86g84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.516704 4834 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gd9jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbe1b4f9-f835-43ba-9496-a9e60af3b87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:02Z\\\",\\\"message\\\":\\\"2026-01-21T14:31:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f\\\\n2026-01-21T14:31:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_10b9b755-756f-4828-b987-d83a8132d92f to /host/opt/cni/bin/\\\\n2026-01-21T14:31:17Z [verbose] multus-daemon started\\\\n2026-01-21T14:31:17Z [verbose] Readiness Indicator file check\\\\n2026-01-21T14:32:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:32:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xhj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gd9jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.532815 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b3931d0-e57b-457f-94da-b56c92b40090\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T14:32:15Z\\\",\\\"message\\\":\\\"oMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/redhat-operators]} name:Service_openshift-marketplace/redhat-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 14:32:14.951351 6843 services_controller.go:451] Built service openshift-multus/multus-admission-controller cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T14:32:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:31:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:31:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmwbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6qwpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.542270 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8vs7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtqf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.553279 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.553316 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.553325 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.553341 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.553352 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.555246 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ed7ef3d-f6e9-41e7-888b-664402a34c31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7329f45f86f0a468627a0f40c4ec27e6f0510cb480ecfa556b49f6941afe4b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://686bde4fe1c6445f4a406cf6f6ee74027a333feb87188ef79d726adf13216a79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d02cd91c2b59108979b13ad64d45b872228bd6ef8de488a23f936ccdb7aa19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.569913 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d37ad24-4387-4d85-bb15-50abc42dd27b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:30:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d79d6d1f0b1be80b358d624746e6afaf9b8d13e4b7e75268f72ab35ae062967a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:30:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b4aea0ba62e2303c60b7d7bdd51bd1223308834926d14a366bb741879a8d9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T14:30:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T14:30:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:30:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.586496 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.600598 4834 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8stvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"868b7692-0771-42e1-8bfc-1882f6204823\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T14:31:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://385f0fc3c633c32183787417b1f5721fc58cb5900c1e7a46d71798abc494e802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T14:31:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jbh9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T14:31:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8stvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T14:32:34Z is after 2025-08-24T17:21:41Z" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.656855 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.656913 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.656974 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.657021 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.657048 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.759200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.759259 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.759268 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.759282 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.759291 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.862574 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.862645 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.862667 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.862699 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.862720 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.965108 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.965170 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.965187 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.965214 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:34 crc kubenswrapper[4834]: I0121 14:32:34.965231 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:34Z","lastTransitionTime":"2026-01-21T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.067699 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.067748 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.067759 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.067777 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.067790 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.170686 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.170723 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.170734 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.170748 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.170759 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.273382 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.273422 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.273433 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.273467 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.273482 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.324444 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.324592 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.324568 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.324810 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:35 crc kubenswrapper[4834]: E0121 14:32:35.324813 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:35 crc kubenswrapper[4834]: E0121 14:32:35.325021 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:35 crc kubenswrapper[4834]: E0121 14:32:35.325068 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:35 crc kubenswrapper[4834]: E0121 14:32:35.325180 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.340849 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 21:13:32.370025695 +0000 UTC Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.376543 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.376615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.376637 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.376668 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.376691 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.480253 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.480314 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.480337 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.480367 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.480389 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.583651 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.583691 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.583704 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.583719 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.583733 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.689107 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.689188 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.689213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.689246 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.689271 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.791833 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.791872 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.791883 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.791898 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.791910 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.894757 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.894819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.894888 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.894918 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.894967 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.997751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.997812 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.997831 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.997853 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:35 crc kubenswrapper[4834]: I0121 14:32:35.997871 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:35Z","lastTransitionTime":"2026-01-21T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.100446 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.100672 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.100797 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.100909 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.101038 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.204081 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.204111 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.204119 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.204135 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.204143 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.308432 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.308497 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.308518 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.308551 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.308575 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.341574 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:39:50.209845224 +0000 UTC Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.411418 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.411716 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.411871 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.411988 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.412085 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.516152 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.516257 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.516283 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.516323 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.516352 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.618758 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.618794 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.618804 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.618819 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:36 crc kubenswrapper[4834]: I0121 14:32:36.618830 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:36Z","lastTransitionTime":"2026-01-21T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-record group repeats, identical except for timestamps, at 14:32:36.720, 14:32:36.823, 14:32:36.926, 14:32:37.029, 14:32:37.132 and 14:32:37.235; repeats elided]
Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.324817 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2"
Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.324888 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.325006 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.325146 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 14:32:37 crc kubenswrapper[4834]: E0121 14:32:37.325186 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
Jan 21 14:32:37 crc kubenswrapper[4834]: E0121 14:32:37.325385 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 14:32:37 crc kubenswrapper[4834]: E0121 14:32:37.325427 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
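The condition={...} payload that setters.go prints above is the wire form of a node condition. A minimal stdlib sketch that reproduces the logged JSON; the struct here is a hand-rolled mirror of the core/v1 NodeCondition shape, written locally so the snippet stays dependency-free:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the field order and JSON tags seen in the
// "Node became not ready" records above (local type, not the real k8s.io/api one).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-21T14:32:36Z",
		LastTransitionTime: "2026-01-21T14:32:36Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // same shape as the condition={...} payload logged above
}
```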
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:37 crc kubenswrapper[4834]: E0121 14:32:37.325486 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.339035 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.339067 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.339076 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.339088 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.339098 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.342382 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:13:04.956412553 +0000 UTC Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.441057 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.441133 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.441156 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.441184 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.441205 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.543592 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.543637 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.543650 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.543668 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.543681 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.646035 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.646362 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.646502 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.646646 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.646776 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.749070 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.749105 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.749113 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.749129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.749138 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.851408 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.851644 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.851751 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.851834 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.851902 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.954142 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.954454 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.954544 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.954641 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:37 crc kubenswrapper[4834]: I0121 14:32:37.954720 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:37Z","lastTransitionTime":"2026-01-21T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.056673 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.056717 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.056732 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.056747 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.056757 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.159218 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.159258 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.159270 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.159283 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.159291 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.261615 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.261640 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.261649 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.261661 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.261669 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.342515 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:22:52.964861677 +0000 UTC Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.364138 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.364186 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.364199 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.364216 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.364233 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.466483 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.466550 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.466565 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.466584 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.466597 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.568490 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.568765 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.568841 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.568905 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.569012 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.670851 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.670895 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.670913 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.670970 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.670986 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.773992 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.774056 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.774067 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.774082 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.774093 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.876222 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.876260 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.876285 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.876298 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.876307 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.978827 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.978907 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.978977 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.978999 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:38 crc kubenswrapper[4834]: I0121 14:32:38.979016 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:38Z","lastTransitionTime":"2026-01-21T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.081548 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.081594 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.081602 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.081618 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.081630 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:39Z","lastTransitionTime":"2026-01-21T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.184129 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.184213 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.184238 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.184266 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.184286 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:39Z","lastTransitionTime":"2026-01-21T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.287096 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.287199 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.287218 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.287238 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.287255 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:39Z","lastTransitionTime":"2026-01-21T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.323570 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.323617 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.323625 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.323587 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:39 crc kubenswrapper[4834]: E0121 14:32:39.323729 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
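Every sync retry above fails for the same reason: the runtime reports no CNI configuration under /etc/kubernetes/cni/net.d/. A minimal stdlib sketch of that directory check, purely illustrative (the real probe lives in the container runtime's CNI plugin manager, not in kubelet code like this):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the NetworkPluginNotReady message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// extensions commonly accepted by CNI config loaders (assumption)
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// mirrors the condition the kubelet keeps reporting above
		fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
	}
}
```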
Jan 21 14:32:39 crc kubenswrapper[4834]: E0121 14:32:39.323825 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4"
Jan 21 14:32:39 crc kubenswrapper[4834]: E0121 14:32:39.323974 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 14:32:39 crc kubenswrapper[4834]: E0121 14:32:39.324076 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.331200 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.331256 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.331293 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.331322 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.331347 4834 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T14:32:39Z","lastTransitionTime":"2026-01-21T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.344225 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:50:49.720905425 +0000 UTC
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.389399 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km"]
Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.389895 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km"
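The pod_startup_latency_tracker records just below report podStartSLOduration. For these pods both pull timestamps are the zero time, so no image-pull interval is subtracted and the SLO duration is simply watchObservedRunningTime minus podCreationTimestamp. A stdlib sketch of that arithmetic using the etcd-crc record's values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps appear in the log in Go's default time.Time print format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-21 14:31:12 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-21 14:32:39.462438342 +0000 UTC")
	// firstStartedPulling/lastFinishedPulling are zero, so nothing is subtracted.
	fmt.Println(observed.Sub(created).Seconds()) // 87.462438342, matching podStartSLOduration
}
```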
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.391812 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.391992 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.392258 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.392593 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.462459 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=87.462438342 podStartE2EDuration="1m27.462438342s" podCreationTimestamp="2026-01-21 14:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.460176459 +0000 UTC m=+105.434525504" watchObservedRunningTime="2026-01-21 14:32:39.462438342 +0000 UTC m=+105.436787387" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.463124 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.463116384 podStartE2EDuration="55.463116384s" podCreationTimestamp="2026-01-21 14:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.426961114 +0000 UTC m=+105.401310169" watchObservedRunningTime="2026-01-21 14:32:39.463116384 +0000 UTC m=+105.437465429" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.492268 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a44a777a-9cf7-4d11-95d9-3511d918ae22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.492309 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a44a777a-9cf7-4d11-95d9-3511d918ae22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.492370 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.492387 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a44a777a-9cf7-4d11-95d9-3511d918ae22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.492402 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.493770 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.493750976 podStartE2EDuration="1m25.493750976s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.479244951 +0000 UTC m=+105.453594006" watchObservedRunningTime="2026-01-21 14:32:39.493750976 +0000 UTC m=+105.468100021" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.507479 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podStartSLOduration=84.507447085 podStartE2EDuration="1m24.507447085s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.507374823 +0000 UTC m=+105.481723898" watchObservedRunningTime="2026-01-21 14:32:39.507447085 +0000 UTC m=+105.481796130" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.550200 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gd9jh" podStartSLOduration=84.550178706 podStartE2EDuration="1m24.550178706s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.528733907 +0000 UTC m=+105.503082952" watchObservedRunningTime="2026-01-21 14:32:39.550178706 +0000 UTC m=+105.524527751" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.568141 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.56812391 podStartE2EDuration="1m25.56812391s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.568086659 +0000 UTC m=+105.542435714" watchObservedRunningTime="2026-01-21 14:32:39.56812391 +0000 UTC m=+105.542472955" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.580655 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.580640452 podStartE2EDuration="23.580640452s" podCreationTimestamp="2026-01-21 14:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.580176477 +0000 UTC m=+105.554525522" 
watchObservedRunningTime="2026-01-21 14:32:39.580640452 +0000 UTC m=+105.554989497" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.592807 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.592844 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a44a777a-9cf7-4d11-95d9-3511d918ae22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.592862 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.592883 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a44a777a-9cf7-4d11-95d9-3511d918ae22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.592897 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a44a777a-9cf7-4d11-95d9-3511d918ae22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.593178 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.593384 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a44a777a-9cf7-4d11-95d9-3511d918ae22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.594826 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a44a777a-9cf7-4d11-95d9-3511d918ae22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc 
kubenswrapper[4834]: I0121 14:32:39.600250 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a44a777a-9cf7-4d11-95d9-3511d918ae22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.608404 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a44a777a-9cf7-4d11-95d9-3511d918ae22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gb2km\" (UID: \"a44a777a-9cf7-4d11-95d9-3511d918ae22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.625337 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8stvm" podStartSLOduration=85.625305245 podStartE2EDuration="1m25.625305245s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.609863809 +0000 UTC m=+105.584212854" watchObservedRunningTime="2026-01-21 14:32:39.625305245 +0000 UTC m=+105.599654290" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.683151 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jjx4h" podStartSLOduration=85.683099337 podStartE2EDuration="1m25.683099337s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.681462265 +0000 UTC m=+105.655811330" watchObservedRunningTime="2026-01-21 14:32:39.683099337 +0000 UTC m=+105.657448382" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.709503 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.715911 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-66jlt" podStartSLOduration=84.715894679 podStartE2EDuration="1m24.715894679s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.715166916 +0000 UTC m=+105.689515951" watchObservedRunningTime="2026-01-21 14:32:39.715894679 +0000 UTC m=+105.690243734" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.738453 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-95s5h" podStartSLOduration=84.738424581 podStartE2EDuration="1m24.738424581s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.737653757 +0000 UTC m=+105.712002802" watchObservedRunningTime="2026-01-21 14:32:39.738424581 +0000 UTC m=+105.712773666" Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.914891 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" event={"ID":"a44a777a-9cf7-4d11-95d9-3511d918ae22","Type":"ContainerStarted","Data":"6d45cf36c457a3b5ce4f2d151ea4d968cc4cfcf15413e6e8b7caae6726b938b3"} Jan 21 14:32:39 crc kubenswrapper[4834]: I0121 14:32:39.915379 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" event={"ID":"a44a777a-9cf7-4d11-95d9-3511d918ae22","Type":"ContainerStarted","Data":"b723a75161858d740936f7d53ba356a04b9f1f5f470c618a77dc97001018a130"} Jan 21 14:32:40 crc kubenswrapper[4834]: I0121 14:32:40.345031 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:16:00.487893345 +0000 UTC Jan 21 14:32:40 crc kubenswrapper[4834]: I0121 14:32:40.345112 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 14:32:40 crc kubenswrapper[4834]: I0121 14:32:40.353578 4834 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 14:32:41 crc kubenswrapper[4834]: I0121 14:32:41.324640 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:41 crc kubenswrapper[4834]: I0121 14:32:41.324781 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:41 crc kubenswrapper[4834]: I0121 14:32:41.324920 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:41 crc kubenswrapper[4834]: I0121 14:32:41.324951 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:41 crc kubenswrapper[4834]: E0121 14:32:41.325131 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:41 crc kubenswrapper[4834]: E0121 14:32:41.325210 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:41 crc kubenswrapper[4834]: E0121 14:32:41.325306 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:41 crc kubenswrapper[4834]: E0121 14:32:41.325371 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:42 crc kubenswrapper[4834]: I0121 14:32:42.324646 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:32:42 crc kubenswrapper[4834]: E0121 14:32:42.324813 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:32:43 crc kubenswrapper[4834]: I0121 14:32:43.324370 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:43 crc kubenswrapper[4834]: I0121 14:32:43.324446 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:43 crc kubenswrapper[4834]: I0121 14:32:43.324483 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:43 crc kubenswrapper[4834]: I0121 14:32:43.324391 4834 util.go:30] "No sandbox for pod can be found. 
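The certificate_manager.go records above print a different rotation deadline on every evaluation, and each deadline already lies in the past relative to the log's clock, which is why "Rotating certificates" fires at 14:32:40. A sketch of the jitter scheme client-go's certificate manager is believed to use (the 70-90% fractions and the 90-day lifetime here are assumptions for illustration, not taken from this log):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point 70-90% of the way through the
// certificate's validity window (assumed scheme; the real logic lives in
// k8s.io/client-go/util/certificate).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiration from the log
	notBefore := notAfter.Add(-90 * 24 * time.Hour)                 // assumed 90-day lifetime
	for i := 0; i < 3; i++ {
		// Each evaluation draws a fresh deadline, which is why consecutive
		// certificate_manager.go lines above show different deadlines.
		fmt.Println(nextRotationDeadline(notBefore, notAfter))
	}
}
```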
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:43 crc kubenswrapper[4834]: E0121 14:32:43.324671 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:43 crc kubenswrapper[4834]: E0121 14:32:43.324823 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:43 crc kubenswrapper[4834]: E0121 14:32:43.324873 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:43 crc kubenswrapper[4834]: E0121 14:32:43.325000 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:45 crc kubenswrapper[4834]: I0121 14:32:45.324548 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:45 crc kubenswrapper[4834]: I0121 14:32:45.324597 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:45 crc kubenswrapper[4834]: I0121 14:32:45.324558 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:45 crc kubenswrapper[4834]: I0121 14:32:45.324571 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:45 crc kubenswrapper[4834]: E0121 14:32:45.324695 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:45 crc kubenswrapper[4834]: E0121 14:32:45.324897 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:45 crc kubenswrapper[4834]: E0121 14:32:45.324977 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:45 crc kubenswrapper[4834]: E0121 14:32:45.324908 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:47 crc kubenswrapper[4834]: I0121 14:32:47.324523 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:47 crc kubenswrapper[4834]: I0121 14:32:47.324566 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:47 crc kubenswrapper[4834]: I0121 14:32:47.324524 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:47 crc kubenswrapper[4834]: I0121 14:32:47.324620 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:47 crc kubenswrapper[4834]: E0121 14:32:47.324760 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:47 crc kubenswrapper[4834]: E0121 14:32:47.324830 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:47 crc kubenswrapper[4834]: E0121 14:32:47.324962 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:47 crc kubenswrapper[4834]: E0121 14:32:47.325057 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.944063 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/1.log" Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.944953 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/0.log" Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.944991 4834 generic.go:334] "Generic (PLEG): container finished" podID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" containerID="17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f" exitCode=1 Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.945024 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerDied","Data":"17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f"} Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.945063 4834 scope.go:117] "RemoveContainer" containerID="5a44f2a579868fadd671d45c7fc847f937acebaefb322917286a28f7b992e788" Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.945516 4834 scope.go:117] "RemoveContainer" containerID="17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f" Jan 21 14:32:48 crc kubenswrapper[4834]: E0121 14:32:48.945944 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gd9jh_openshift-multus(dbe1b4f9-f835-43ba-9496-a9e60af3b87f)\"" pod="openshift-multus/multus-gd9jh" podUID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" Jan 21 14:32:48 crc kubenswrapper[4834]: I0121 14:32:48.969919 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gb2km" podStartSLOduration=94.969878211 podStartE2EDuration="1m34.969878211s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:32:39.930196922 +0000 UTC m=+105.904545967" watchObservedRunningTime="2026-01-21 14:32:48.969878211 +0000 UTC m=+114.944227256" Jan 21 14:32:49 crc kubenswrapper[4834]: I0121 14:32:49.323598 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:49 crc kubenswrapper[4834]: I0121 14:32:49.323782 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:49 crc kubenswrapper[4834]: E0121 14:32:49.324171 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:49 crc kubenswrapper[4834]: I0121 14:32:49.323862 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:49 crc kubenswrapper[4834]: I0121 14:32:49.323817 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:49 crc kubenswrapper[4834]: E0121 14:32:49.324265 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:49 crc kubenswrapper[4834]: E0121 14:32:49.324505 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:49 crc kubenswrapper[4834]: E0121 14:32:49.324250 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:49 crc kubenswrapper[4834]: I0121 14:32:49.952417 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/1.log" Jan 21 14:32:51 crc kubenswrapper[4834]: I0121 14:32:51.324391 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:51 crc kubenswrapper[4834]: I0121 14:32:51.324426 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:51 crc kubenswrapper[4834]: E0121 14:32:51.324498 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:51 crc kubenswrapper[4834]: I0121 14:32:51.324583 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:51 crc kubenswrapper[4834]: E0121 14:32:51.324597 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:51 crc kubenswrapper[4834]: I0121 14:32:51.324436 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:51 crc kubenswrapper[4834]: E0121 14:32:51.324735 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:51 crc kubenswrapper[4834]: E0121 14:32:51.325126 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:53 crc kubenswrapper[4834]: I0121 14:32:53.323736 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:53 crc kubenswrapper[4834]: I0121 14:32:53.323790 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:53 crc kubenswrapper[4834]: E0121 14:32:53.323879 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:53 crc kubenswrapper[4834]: I0121 14:32:53.324006 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:53 crc kubenswrapper[4834]: E0121 14:32:53.324123 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:53 crc kubenswrapper[4834]: I0121 14:32:53.324304 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:53 crc kubenswrapper[4834]: E0121 14:32:53.324395 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:53 crc kubenswrapper[4834]: E0121 14:32:53.324589 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:54 crc kubenswrapper[4834]: E0121 14:32:54.268297 4834 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 14:32:54 crc kubenswrapper[4834]: I0121 14:32:54.327735 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:32:54 crc kubenswrapper[4834]: E0121 14:32:54.328080 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6qwpj_openshift-ovn-kubernetes(0b3931d0-e57b-457f-94da-b56c92b40090)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" Jan 21 14:32:54 crc kubenswrapper[4834]: E0121 14:32:54.569984 4834 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:32:55 crc kubenswrapper[4834]: I0121 14:32:55.324359 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:55 crc kubenswrapper[4834]: I0121 14:32:55.324482 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:55 crc kubenswrapper[4834]: E0121 14:32:55.324580 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:55 crc kubenswrapper[4834]: I0121 14:32:55.324634 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:55 crc kubenswrapper[4834]: I0121 14:32:55.324875 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:55 crc kubenswrapper[4834]: E0121 14:32:55.325188 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:55 crc kubenswrapper[4834]: E0121 14:32:55.325377 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:55 crc kubenswrapper[4834]: E0121 14:32:55.325532 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:57 crc kubenswrapper[4834]: I0121 14:32:57.323564 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:57 crc kubenswrapper[4834]: I0121 14:32:57.323649 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:57 crc kubenswrapper[4834]: E0121 14:32:57.323732 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:57 crc kubenswrapper[4834]: E0121 14:32:57.323823 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:57 crc kubenswrapper[4834]: I0121 14:32:57.323984 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:57 crc kubenswrapper[4834]: E0121 14:32:57.324163 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:57 crc kubenswrapper[4834]: I0121 14:32:57.324246 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:57 crc kubenswrapper[4834]: E0121 14:32:57.324511 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:59 crc kubenswrapper[4834]: I0121 14:32:59.324184 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:32:59 crc kubenswrapper[4834]: I0121 14:32:59.324261 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:32:59 crc kubenswrapper[4834]: E0121 14:32:59.324313 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:32:59 crc kubenswrapper[4834]: I0121 14:32:59.324378 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:32:59 crc kubenswrapper[4834]: E0121 14:32:59.324466 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:32:59 crc kubenswrapper[4834]: E0121 14:32:59.324531 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:32:59 crc kubenswrapper[4834]: I0121 14:32:59.325019 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:32:59 crc kubenswrapper[4834]: E0121 14:32:59.325118 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:32:59 crc kubenswrapper[4834]: E0121 14:32:59.571577 4834 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.324237 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.324308 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:01 crc kubenswrapper[4834]: E0121 14:33:01.324360 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.324357 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:01 crc kubenswrapper[4834]: E0121 14:33:01.324468 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.324308 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:01 crc kubenswrapper[4834]: E0121 14:33:01.324856 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:01 crc kubenswrapper[4834]: E0121 14:33:01.324985 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.325007 4834 scope.go:117] "RemoveContainer" containerID="17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.995005 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/1.log" Jan 21 14:33:01 crc kubenswrapper[4834]: I0121 14:33:01.995400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerStarted","Data":"97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755"} Jan 21 14:33:03 crc kubenswrapper[4834]: I0121 14:33:03.324258 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:03 crc kubenswrapper[4834]: I0121 14:33:03.324295 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:03 crc kubenswrapper[4834]: I0121 14:33:03.324310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:03 crc kubenswrapper[4834]: I0121 14:33:03.324258 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:03 crc kubenswrapper[4834]: E0121 14:33:03.324402 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:03 crc kubenswrapper[4834]: E0121 14:33:03.324485 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:03 crc kubenswrapper[4834]: E0121 14:33:03.324649 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:03 crc kubenswrapper[4834]: E0121 14:33:03.324678 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:04 crc kubenswrapper[4834]: E0121 14:33:04.572090 4834 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:33:05 crc kubenswrapper[4834]: I0121 14:33:05.323781 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:05 crc kubenswrapper[4834]: I0121 14:33:05.323852 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:05 crc kubenswrapper[4834]: I0121 14:33:05.323915 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:05 crc kubenswrapper[4834]: E0121 14:33:05.323920 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:05 crc kubenswrapper[4834]: E0121 14:33:05.324036 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:05 crc kubenswrapper[4834]: E0121 14:33:05.324097 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:05 crc kubenswrapper[4834]: I0121 14:33:05.324091 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:05 crc kubenswrapper[4834]: E0121 14:33:05.324177 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:07 crc kubenswrapper[4834]: I0121 14:33:07.324442 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:07 crc kubenswrapper[4834]: I0121 14:33:07.324515 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:07 crc kubenswrapper[4834]: E0121 14:33:07.324561 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:07 crc kubenswrapper[4834]: I0121 14:33:07.324442 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:07 crc kubenswrapper[4834]: E0121 14:33:07.324635 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:07 crc kubenswrapper[4834]: I0121 14:33:07.324458 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:07 crc kubenswrapper[4834]: E0121 14:33:07.324728 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:07 crc kubenswrapper[4834]: E0121 14:33:07.324805 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:08 crc kubenswrapper[4834]: I0121 14:33:08.324618 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:33:09 crc kubenswrapper[4834]: I0121 14:33:09.324119 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:09 crc kubenswrapper[4834]: I0121 14:33:09.324132 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:09 crc kubenswrapper[4834]: I0121 14:33:09.324178 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:09 crc kubenswrapper[4834]: I0121 14:33:09.324273 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:09 crc kubenswrapper[4834]: E0121 14:33:09.324470 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:09 crc kubenswrapper[4834]: E0121 14:33:09.324559 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:09 crc kubenswrapper[4834]: E0121 14:33:09.324416 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:09 crc kubenswrapper[4834]: E0121 14:33:09.324659 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:09 crc kubenswrapper[4834]: E0121 14:33:09.573452 4834 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 14:33:09 crc kubenswrapper[4834]: I0121 14:33:09.877193 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtqf2"] Jan 21 14:33:10 crc kubenswrapper[4834]: I0121 14:33:10.027105 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/3.log" Jan 21 14:33:10 crc kubenswrapper[4834]: I0121 14:33:10.031973 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerStarted","Data":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} Jan 21 14:33:10 crc kubenswrapper[4834]: I0121 14:33:10.032603 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:33:10 crc kubenswrapper[4834]: I0121 14:33:10.033099 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:10 crc kubenswrapper[4834]: E0121 14:33:10.033255 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:11 crc kubenswrapper[4834]: I0121 14:33:11.323859 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:11 crc kubenswrapper[4834]: I0121 14:33:11.324056 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:11 crc kubenswrapper[4834]: I0121 14:33:11.324178 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:11 crc kubenswrapper[4834]: E0121 14:33:11.324177 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:11 crc kubenswrapper[4834]: E0121 14:33:11.324318 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:11 crc kubenswrapper[4834]: E0121 14:33:11.324497 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:12 crc kubenswrapper[4834]: I0121 14:33:12.323834 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:12 crc kubenswrapper[4834]: E0121 14:33:12.324035 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:13 crc kubenswrapper[4834]: I0121 14:33:13.603332 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:13 crc kubenswrapper[4834]: I0121 14:33:13.603450 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:13 crc kubenswrapper[4834]: E0121 14:33:13.603486 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 14:33:13 crc kubenswrapper[4834]: I0121 14:33:13.603518 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:13 crc kubenswrapper[4834]: E0121 14:33:13.603635 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 14:33:13 crc kubenswrapper[4834]: I0121 14:33:13.603651 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:13 crc kubenswrapper[4834]: E0121 14:33:13.603698 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 14:33:13 crc kubenswrapper[4834]: E0121 14:33:13.603756 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtqf2" podUID="d31034df-9ceb-49b0-9ad5-334dcaa28fa4" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.324653 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.324696 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.324662 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.324687 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.327352 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.327839 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.328174 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.328538 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.328706 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:33:15 crc kubenswrapper[4834]: I0121 14:33:15.330532 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:33:17 crc kubenswrapper[4834]: I0121 14:33:17.114883 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:33:17 crc kubenswrapper[4834]: I0121 14:33:17.114970 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.036873 4834 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.088498 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podStartSLOduration=125.088460535 podStartE2EDuration="2m5.088460535s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:10.065412752 +0000 UTC m=+136.039761817" watchObservedRunningTime="2026-01-21 14:33:20.088460535 +0000 UTC m=+146.062809620" Jan 21 14:33:20 crc 
kubenswrapper[4834]: I0121 14:33:20.089552 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.090455 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.095483 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcngd"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.096342 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.097352 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.097771 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.098637 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-59dsk"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.099358 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.104805 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.105681 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.107395 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.109192 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.110538 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.111341 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.112093 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.112657 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.112784 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjnhd"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.112849 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113030 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113133 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113192 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113306 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113329 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113052 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113451 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113452 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113570 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113677 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113707 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.113806 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114181 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114212 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114406 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114534 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114819 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114849 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.115193 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.115412 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114546 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.114547 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.115675 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.115686 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.116276 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.116520 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.118120 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5hf5s"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.119226 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.119285 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.120056 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.125113 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zjflv"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.125661 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.140723 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.140911 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.141326 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.141346 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.141374 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.142155 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.142386 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.142548 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.142752 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.164474 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.171857 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.174308 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.174885 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.175414 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.175634 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.175812 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.175945 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176375 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeaf40b-1ed4-464e-a550-60dce40a85f2-serving-cert\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176409 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176438 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176461 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wml69\" (UniqueName: \"kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176483 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7r9\" (UniqueName: \"kubernetes.io/projected/5ef76e94-0cfd-430e-80cf-adb712ae9101-kube-api-access-ls7r9\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176505 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176516 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176527 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-config\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176547 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176571 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74c8l\" (UniqueName: \"kubernetes.io/projected/be302830-09db-4bde-8611-08739d8dff31-kube-api-access-74c8l\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176635 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef76e94-0cfd-430e-80cf-adb712ae9101-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176671 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176670 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-auth-proxy-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176795 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcdd4\" (UniqueName: \"kubernetes.io/projected/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-kube-api-access-rcdd4\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176819 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176840 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176900 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176921 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-serving-cert\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176958 4834 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176963 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.176986 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fs6\" (UniqueName: \"kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177009 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177032 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177056 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef76e94-0cfd-430e-80cf-adb712ae9101-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit-dir\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177104 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177126 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177147 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177169 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg84h\" (UniqueName: \"kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177196 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-trusted-ca\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177220 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177241 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-encryption-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177265 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpwj\" (UniqueName: \"kubernetes.io/projected/7908fe78-158f-45d2-9e1e-2357a6f9cd42-kube-api-access-5jpwj\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177289 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177310 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177330 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7908fe78-158f-45d2-9e1e-2357a6f9cd42-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177348 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-image-import-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177366 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177394 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbwb\" (UniqueName: \"kubernetes.io/projected/0110ce00-4e75-42eb-ab42-75df43f68cbe-kube-api-access-6rbwb\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177494 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177575 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177642 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177772 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177493 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177840 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-client\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177872 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be302830-09db-4bde-8611-08739d8dff31-serving-cert\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177887 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-config\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177946 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/168c6706-629a-4de3-9010-4a6ad7fb1f60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177969 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.177989 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0110ce00-4e75-42eb-ab42-75df43f68cbe-machine-approver-tls\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178003 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178018 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178019 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178093 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178179 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178033 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178262 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-images\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178300 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178330 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178361 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be302830-09db-4bde-8611-08739d8dff31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178387 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdqkr\" (UniqueName: \"kubernetes.io/projected/51da62a2-0544-4231-8ab0-0b452ff8d2af-kube-api-access-bdqkr\") pod \"downloads-7954f5f757-5hf5s\" (UID: \"51da62a2-0544-4231-8ab0-0b452ff8d2af\") " pod="openshift-console/downloads-7954f5f757-5hf5s"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178414 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntvx\" (UniqueName: \"kubernetes.io/projected/afeaf40b-1ed4-464e-a550-60dce40a85f2-kube-api-access-cntvx\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178435 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbb5\" (UniqueName: \"kubernetes.io/projected/cd7c6174-21c8-4685-8c9f-3898b211fc35-kube-api-access-2pbb5\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7c6174-21c8-4685-8c9f-3898b211fc35-serving-cert\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-node-pullsecrets\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178523 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-config\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178561 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178586 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2w77\" (UniqueName: \"kubernetes.io/projected/168c6706-629a-4de3-9010-4a6ad7fb1f60-kube-api-access-v2w77\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178272 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178774 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178866 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178307 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178994 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178375 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178408 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179095 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178432 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179145 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.178487 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179216 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179272 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179366 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179686 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179795 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.179869 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.180366 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.180765 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.181084 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.184273 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.184991 4834 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.186886 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4sz2q"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.187460 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.187830 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.188689 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4sz2q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.190368 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gg52f"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.190954 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.184652 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.190348 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.202417 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.208853 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.209093 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.209161 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.209362 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.209374 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.210703 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.210957 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.211529 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.211680 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.211843 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.211998 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212174 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212337 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212476 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212511 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212604 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212655 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212737 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212805 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.212973 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xt48p"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.233887 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.234570 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.240220 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.258953 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.259301 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.259357 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.260049 4834 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.260445 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.260530 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.260627 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.261057 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.261416 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.262978 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.263391 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.263791 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.264196 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.265375 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.267145 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.267271 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.268774 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qg6w9"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.269361 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.269736 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.270398 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.270660 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.271206 4834 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.276876 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.277485 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.277895 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.278002 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.277904 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc"]
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.278761 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279249 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeaf40b-1ed4-464e-a550-60dce40a85f2-serving-cert\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279319 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279343 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279366 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279388 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wml69\" (UniqueName: \"kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279411 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7r9\" (UniqueName: \"kubernetes.io/projected/5ef76e94-0cfd-430e-80cf-adb712ae9101-kube-api-access-ls7r9\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279433 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-config\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279454 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-client\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279474 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46g47\" (UniqueName: \"kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279520 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279544 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/428c9d34-0ff9-4979-8a9b-ae8171b73a20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279566 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-metrics-certs\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279589 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74c8l\" (UniqueName: \"kubernetes.io/projected/be302830-09db-4bde-8611-08739d8dff31-kube-api-access-74c8l\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279609 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279630 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef76e94-0cfd-430e-80cf-adb712ae9101-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279662 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-auth-proxy-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279687 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcdd4\" (UniqueName: \"kubernetes.io/projected/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-kube-api-access-rcdd4\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279713 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lfr\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-kube-api-access-d6lfr\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279734 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62b731df-5898-4def-baa5-bc423a8c542b-metrics-tls\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279767 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279790 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279813 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzdrm\" (UniqueName: \"kubernetes.io/projected/62b731df-5898-4def-baa5-bc423a8c542b-kube-api-access-qzdrm\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279838 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279863 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279884 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279907 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-serving-cert\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.279976 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fs6\" (UniqueName: \"kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280000 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-default-certificate\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280024 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit-dir\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280073 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280098 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef76e94-0cfd-430e-80cf-adb712ae9101-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282207 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282248 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg84h\" (UniqueName: \"kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282279 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282306 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282330 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-config\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282352 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282373 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-service-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282400 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-trusted-ca\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282423 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733e12b-1157-455a-a5fb-06f8bfde751f-service-ca-bundle\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282449 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-encryption-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282472 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jpwj\" (UniqueName: \"kubernetes.io/projected/7908fe78-158f-45d2-9e1e-2357a6f9cd42-kube-api-access-5jpwj\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282522 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f16257-5a68-4505-8a6b-0073cf2e1080-config\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282547 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr729\" (UniqueName: \"kubernetes.io/projected/0fd88645-79b0-4d58-985d-75e35a14230e-kube-api-access-rr729\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282612 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282669 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7908fe78-158f-45d2-9e1e-2357a6f9cd42-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282690 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282708 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282728 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmf8d\" (UniqueName: \"kubernetes.io/projected/df1ea242-a6d3-430d-a33c-5e275f4855dd-kube-api-access-qmf8d\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282749 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-image-import-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282765 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqfk5\" (UniqueName: \"kubernetes.io/projected/5733e12b-1157-455a-a5fb-06f8bfde751f-kube-api-access-jqfk5\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282867 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbwb\" (UniqueName: \"kubernetes.io/projected/0110ce00-4e75-42eb-ab42-75df43f68cbe-kube-api-access-6rbwb\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282891 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-serving-cert\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282921 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282958 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d4953e7-9186-4e62-a45c-fffa78bba767-metrics-tls\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282966 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282975 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.283776 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-config\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.284770 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.284896 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.285645 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.285858 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-image-import-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.286115 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.287104 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280856 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.287747 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.288431 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.288458 4834 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.288862 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289225 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289587 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.282968 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-auth-proxy-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289716 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289770 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd88645-79b0-4d58-985d-75e35a14230e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289805 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-client\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289829 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be302830-09db-4bde-8611-08739d8dff31-serving-cert\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289852 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-config\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.289984 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config\") pod \"console-f9d7485db-vzwpb\" (UID: 
\"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290017 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/168c6706-629a-4de3-9010-4a6ad7fb1f60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290037 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-stats-auth\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290059 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290097 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/428c9d34-0ff9-4979-8a9b-ae8171b73a20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290131 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcngd"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290154 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290540 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8sn2t"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.291281 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.291395 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280966 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.291769 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.291953 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.291966 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.280312 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292111 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292236 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292396 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292533 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.290136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0110ce00-4e75-42eb-ab42-75df43f68cbe-machine-approver-tls\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292672 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292704 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-kube-api-access-5zlwc\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292731 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292772 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292794 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d4953e7-9186-4e62-a45c-fffa78bba767-trusted-ca\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292821 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292832 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292843 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292865 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-images\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292887 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f16257-5a68-4505-8a6b-0073cf2e1080-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292913 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be302830-09db-4bde-8611-08739d8dff31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292959 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.292983 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd88645-79b0-4d58-985d-75e35a14230e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.293011 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cntvx\" (UniqueName: \"kubernetes.io/projected/afeaf40b-1ed4-464e-a550-60dce40a85f2-kube-api-access-cntvx\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.293032 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdqkr\" (UniqueName: \"kubernetes.io/projected/51da62a2-0544-4231-8ab0-0b452ff8d2af-kube-api-access-bdqkr\") pod \"downloads-7954f5f757-5hf5s\" (UID: \"51da62a2-0544-4231-8ab0-0b452ff8d2af\") " pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.293093 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.293432 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.281339 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.294068 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ef76e94-0cfd-430e-80cf-adb712ae9101-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.294569 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.294978 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " 
pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.296338 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0110ce00-4e75-42eb-ab42-75df43f68cbe-machine-approver-tls\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.296525 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.296774 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-serving-cert\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297147 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0110ce00-4e75-42eb-ab42-75df43f68cbe-config\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297284 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297329 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297390 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be302830-09db-4bde-8611-08739d8dff31-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297406 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit-dir\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.297421 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7c6174-21c8-4685-8c9f-3898b211fc35-serving-cert\") pod 
\"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298393 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-config\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298431 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298613 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298674 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbb5\" (UniqueName: \"kubernetes.io/projected/cd7c6174-21c8-4685-8c9f-3898b211fc35-kube-api-access-2pbb5\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298705 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0f16257-5a68-4505-8a6b-0073cf2e1080-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298732 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-node-pullsecrets\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298756 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2w77\" (UniqueName: \"kubernetes.io/projected/168c6706-629a-4de3-9010-4a6ad7fb1f60-kube-api-access-v2w77\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.299114 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/168c6706-629a-4de3-9010-4a6ad7fb1f60-images\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.299166 
4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-config\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.299196 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.299779 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.301174 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.322451 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be302830-09db-4bde-8611-08739d8dff31-serving-cert\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.322534 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-59dsk"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.322580 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.322686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-etcd-client\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.298923 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-node-pullsecrets\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.323540 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd7c6174-21c8-4685-8c9f-3898b211fc35-trusted-ca\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.323747 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.323989 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7908fe78-158f-45d2-9e1e-2357a6f9cd42-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.324034 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.324298 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-audit\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.324911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.325485 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.325811 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.326115 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeaf40b-1ed4-464e-a550-60dce40a85f2-config\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.326126 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeaf40b-1ed4-464e-a550-60dce40a85f2-serving-cert\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.326495 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef76e94-0cfd-430e-80cf-adb712ae9101-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.326879 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-encryption-config\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.327232 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.327840 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.328039 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.329214 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.329302 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.330350 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.331562 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/168c6706-629a-4de3-9010-4a6ad7fb1f60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.331652 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.332719 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.334766 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7c6174-21c8-4685-8c9f-3898b211fc35-serving-cert\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.335258 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.339300 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.339344 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.339357 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.340042 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bk949"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.340882 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bk949" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.341348 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.343003 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.344022 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zjflv"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.344973 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-296l9"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.345653 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.346303 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.348644 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjnhd"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.349742 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.350682 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.351879 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5hf5s"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.352730 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.355252 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gg52f"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.356385 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.357311 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.357858 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.358694 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.359443 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.360400 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9v8ms"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.360984 
4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.361361 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cdhjz"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.362654 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.362727 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.363519 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qg6w9"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.364642 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.365674 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.366708 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.367943 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.368920 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xt48p"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.369881 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.371080 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bk949"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.372211 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.373265 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.373516 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.374552 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.375694 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8sn2t"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.376720 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9v8ms"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.377795 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cdhjz"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 
14:33:20.378864 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.379840 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.381018 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.382000 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.393005 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.399881 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/57d9e464-9d27-44b1-bace-6e559dba046f-signing-key\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.399912 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.399952 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.399991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmf8d\" (UniqueName: \"kubernetes.io/projected/df1ea242-a6d3-430d-a33c-5e275f4855dd-kube-api-access-qmf8d\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400018 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr729\" (UniqueName: \"kubernetes.io/projected/0fd88645-79b0-4d58-985d-75e35a14230e-kube-api-access-rr729\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400040 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjdb\" (UniqueName: \"kubernetes.io/projected/5b5315e3-10a5-4388-9e91-69e5e64bd718-kube-api-access-vmjdb\") pod \"migrator-59844c95c7-2dwbc\" (UID: \"5b5315e3-10a5-4388-9e91-69e5e64bd718\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" Jan 21 
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400065 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqfk5\" (UniqueName: \"kubernetes.io/projected/5733e12b-1157-455a-a5fb-06f8bfde751f-kube-api-access-jqfk5\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400109 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400135 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-csi-data-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400167 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-dir\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400199 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccctm\" (UniqueName: \"kubernetes.io/projected/f164c103-b8ff-419d-bad2-6b17381ffee1-kube-api-access-ccctm\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400231 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d4953e7-9186-4e62-a45c-fffa78bba767-metrics-tls\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400345 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6s6k\" (UniqueName: \"kubernetes.io/projected/57d9e464-9d27-44b1-bace-6e559dba046f-kube-api-access-h6s6k\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400465 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-apiservice-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400593 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcvxv\" (UniqueName: \"kubernetes.io/projected/deaf8010-1fc2-41b3-b94b-6339c4846f32-kube-api-access-pcvxv\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400768 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/428c9d34-0ff9-4979-8a9b-ae8171b73a20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400808 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400846 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-images\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400895 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdfv\" (UniqueName: \"kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400946 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d4953e7-9186-4e62-a45c-fffa78bba767-trusted-ca\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.400977 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32b16913-b72e-4e5f-b684-913111a08bd7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401008 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401039 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hgkd\" (UniqueName: \"kubernetes.io/projected/b9ebfb6e-89ec-4564-b488-08c666fb91af-kube-api-access-9hgkd\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401084 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401135 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8hh\" (UniqueName: \"kubernetes.io/projected/9f95091b-8acb-473d-ac10-c27bcb7e856e-kube-api-access-ss8hh\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401199 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401261 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz6h4\" (UniqueName: \"kubernetes.io/projected/741cdc21-96df-4d88-a05e-d877ea76aa87-kube-api-access-gz6h4\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401286 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401307 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46g47\" (UniqueName: \"kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401359 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11505903-40a1-4bd2-94c6-b8abbf15225e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401378 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11505903-40a1-4bd2-94c6-b8abbf15225e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401425 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lfr\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-kube-api-access-d6lfr\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401447 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62b731df-5898-4def-baa5-bc423a8c542b-metrics-tls\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401524 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdt5\" (UniqueName: \"kubernetes.io/projected/9d684d88-9233-4a40-b9ca-4c393f2d7939-kube-api-access-5cdt5\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401549 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nc2\" (UniqueName: \"kubernetes.io/projected/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-kube-api-access-45nc2\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815061df-a412-4c5e-bb54-5e10e4d420f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401636 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-serving-cert\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401672 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpstr\" (UniqueName: \"kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401688 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/741cdc21-96df-4d88-a05e-d877ea76aa87-proxy-tls\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401719 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-config\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401736 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401752 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2m7l\" (UniqueName: \"kubernetes.io/projected/ff11e3b3-3893-41f2-a824-b04255f56040-kube-api-access-h2m7l\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401767 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401783 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlr4t\" (UniqueName: \"kubernetes.io/projected/468c7958-916e-4d52-bd3b-d8eeeaa09172-kube-api-access-qlr4t\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401801 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-service-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
\"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401815 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/57d9e464-9d27-44b1-bace-6e559dba046f-signing-cabundle\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401832 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11505903-40a1-4bd2-94c6-b8abbf15225e-config\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401848 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401873 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f16257-5a68-4505-8a6b-0073cf2e1080-config\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401899 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401915 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbk4\" (UniqueName: \"kubernetes.io/projected/f5bf163e-0c90-49ce-abd4-c39d628a6d09-kube-api-access-msbk4\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-serving-cert\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.401972 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402000 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402014 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402035 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402052 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd88645-79b0-4d58-985d-75e35a14230e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402068 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-socket-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402085 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/428c9d34-0ff9-4979-8a9b-ae8171b73a20-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402089 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrwl\" (UniqueName: \"kubernetes.io/projected/9637e38c-b666-480c-a92a-71b40d1a41d0-kube-api-access-nsrwl\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402443 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402522 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402551 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-stats-auth\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402577 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxj8v\" (UniqueName: \"kubernetes.io/projected/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-kube-api-access-vxj8v\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402600 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d4953e7-9186-4e62-a45c-fffa78bba767-trusted-ca\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402611 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402684 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-kube-api-access-5zlwc\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402716 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402739 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-registration-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402791 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f16257-5a68-4505-8a6b-0073cf2e1080-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402817 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd88645-79b0-4d58-985d-75e35a14230e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsw52\" (UniqueName: \"kubernetes.io/projected/a3e0c585-701a-4ec9-a901-88877d73a876-kube-api-access-bsw52\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402868 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.402998 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0f16257-5a68-4505-8a6b-0073cf2e1080-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-srv-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403077 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403113 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-client\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403158 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403184 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-client\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403204 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403229 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403263 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deaf8010-1fc2-41b3-b94b-6339c4846f32-proxy-tls\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403298 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8khn\" (UniqueName: \"kubernetes.io/projected/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-kube-api-access-r8khn\") pod \"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrqv\" (UniqueName: \"kubernetes.io/projected/32b16913-b72e-4e5f-b684-913111a08bd7-kube-api-access-ltrqv\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403343 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-policies\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403358 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-plugins-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403383 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/428c9d34-0ff9-4979-8a9b-ae8171b73a20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403449 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f16257-5a68-4505-8a6b-0073cf2e1080-config\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403519 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-metrics-certs\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403563 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzdrm\" (UniqueName: \"kubernetes.io/projected/62b731df-5898-4def-baa5-bc423a8c542b-kube-api-access-qzdrm\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deaf8010-1fc2-41b3-b94b-6339c4846f32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403637 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403658 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403678 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-default-certificate\") pod 
\"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403699 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-mountpoint-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403731 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fd88645-79b0-4d58-985d-75e35a14230e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403744 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff11e3b3-3893-41f2-a824-b04255f56040-tmpfs\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403765 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-webhook-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e0c585-701a-4ec9-a901-88877d73a876-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403805 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-encryption-config\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403830 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733e12b-1157-455a-a5fb-06f8bfde751f-service-ca-bundle\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.403857 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.405000 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd88645-79b0-4d58-985d-75e35a14230e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.405565 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-stats-auth\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.406555 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5733e12b-1157-455a-a5fb-06f8bfde751f-service-ca-bundle\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.406729 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f16257-5a68-4505-8a6b-0073cf2e1080-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.406828 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.407454 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.407779 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d4953e7-9186-4e62-a45c-fffa78bba767-metrics-tls\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.407898 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.409049 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-default-certificate\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.409213 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5733e12b-1157-455a-a5fb-06f8bfde751f-metrics-certs\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.410231 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/428c9d34-0ff9-4979-8a9b-ae8171b73a20-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.413870 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.433262 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.438141 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62b731df-5898-4def-baa5-bc423a8c542b-metrics-tls\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.453051 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.474312 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.493524 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.499595 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-serving-cert\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505087 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815061df-a412-4c5e-bb54-5e10e4d420f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505163 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505190 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-serving-cert\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505213 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/741cdc21-96df-4d88-a05e-d877ea76aa87-proxy-tls\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505243 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpstr\" (UniqueName: \"kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505272 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505295 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlr4t\" (UniqueName: \"kubernetes.io/projected/468c7958-916e-4d52-bd3b-d8eeeaa09172-kube-api-access-qlr4t\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505327 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2m7l\" (UniqueName: \"kubernetes.io/projected/ff11e3b3-3893-41f2-a824-b04255f56040-kube-api-access-h2m7l\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505352 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/57d9e464-9d27-44b1-bace-6e559dba046f-signing-cabundle\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505371 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11505903-40a1-4bd2-94c6-b8abbf15225e-config\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" 
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505387 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505404 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbk4\" (UniqueName: \"kubernetes.io/projected/f5bf163e-0c90-49ce-abd4-c39d628a6d09-kube-api-access-msbk4\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505423 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505442 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505462 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505481 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-socket-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsrwl\" (UniqueName: \"kubernetes.io/projected/9637e38c-b666-480c-a92a-71b40d1a41d0-kube-api-access-nsrwl\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505521 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxj8v\" (UniqueName: \"kubernetes.io/projected/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-kube-api-access-vxj8v\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505559 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505575 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-registration-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505626 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsw52\" (UniqueName: \"kubernetes.io/projected/a3e0c585-701a-4ec9-a901-88877d73a876-kube-api-access-bsw52\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505660 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505690 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-srv-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-client\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505745 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505766 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505785 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505810 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deaf8010-1fc2-41b3-b94b-6339c4846f32-proxy-tls\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505828 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8khn\" (UniqueName: \"kubernetes.io/projected/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-kube-api-access-r8khn\") pod \"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505847 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltrqv\" (UniqueName: \"kubernetes.io/projected/32b16913-b72e-4e5f-b684-913111a08bd7-kube-api-access-ltrqv\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505865 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-policies\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505893 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-plugins-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.505987 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deaf8010-1fc2-41b3-b94b-6339c4846f32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506017 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506034 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-mountpoint-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506064 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-webhook-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506081 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e0c585-701a-4ec9-a901-88877d73a876-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506111 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff11e3b3-3893-41f2-a824-b04255f56040-tmpfs\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506128 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-encryption-config\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506145 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/57d9e464-9d27-44b1-bace-6e559dba046f-signing-key\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506217 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506243 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506280 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmjdb\" (UniqueName: \"kubernetes.io/projected/5b5315e3-10a5-4388-9e91-69e5e64bd718-kube-api-access-vmjdb\") pod \"migrator-59844c95c7-2dwbc\" (UID: \"5b5315e3-10a5-4388-9e91-69e5e64bd718\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506305 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506324 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-csi-data-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506348 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-dir\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506369 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccctm\" (UniqueName: \"kubernetes.io/projected/f164c103-b8ff-419d-bad2-6b17381ffee1-kube-api-access-ccctm\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506409 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6s6k\" (UniqueName: \"kubernetes.io/projected/57d9e464-9d27-44b1-bace-6e559dba046f-kube-api-access-h6s6k\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506433 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-apiservice-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506457 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcvxv\" (UniqueName: \"kubernetes.io/projected/deaf8010-1fc2-41b3-b94b-6339c4846f32-kube-api-access-pcvxv\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506494 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-images\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506528 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdfv\" (UniqueName: \"kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506567 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32b16913-b72e-4e5f-b684-913111a08bd7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506603 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506632 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hgkd\" (UniqueName: \"kubernetes.io/projected/b9ebfb6e-89ec-4564-b488-08c666fb91af-kube-api-access-9hgkd\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506651 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-plugins-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506663 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506803 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8hh\" (UniqueName: \"kubernetes.io/projected/9f95091b-8acb-473d-ac10-c27bcb7e856e-kube-api-access-ss8hh\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506838 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506873 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz6h4\" (UniqueName: \"kubernetes.io/projected/741cdc21-96df-4d88-a05e-d877ea76aa87-kube-api-access-gz6h4\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506890 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506939 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11505903-40a1-4bd2-94c6-b8abbf15225e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506960 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11505903-40a1-4bd2-94c6-b8abbf15225e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507005 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdt5\" (UniqueName: \"kubernetes.io/projected/9d684d88-9233-4a40-b9ca-4c393f2d7939-kube-api-access-5cdt5\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507023 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45nc2\" (UniqueName: \"kubernetes.io/projected/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-kube-api-access-45nc2\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507186 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-csi-data-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507244 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-dir\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507355 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-auth-proxy-config\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507430 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/deaf8010-1fc2-41b3-b94b-6339c4846f32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.506527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-socket-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507568 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-mountpoint-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.507575 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9637e38c-b666-480c-a92a-71b40d1a41d0-registration-dir\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.508072 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ff11e3b3-3893-41f2-a824-b04255f56040-tmpfs\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.513901 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.518159 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-client\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.533165 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.554088 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.563373 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-config\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.592954 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.603330 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.614807 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.615498 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1ea242-a6d3-430d-a33c-5e275f4855dd-etcd-service-ca\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.632682 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.654055 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.673981 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.693559 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.700874 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/deaf8010-1fc2-41b3-b94b-6339c4846f32-proxy-tls\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.713225 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.733835 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.743866 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11505903-40a1-4bd2-94c6-b8abbf15225e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.754751 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.757571 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11505903-40a1-4bd2-94c6-b8abbf15225e-config\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.773382 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.793645 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.813495 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.833640 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.841655 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-client\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.852914 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.858312 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.873833 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.880705 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-serving-cert\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.894317 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.914662 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.933604 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.942497 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-encryption-config\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.954240 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.958237 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-audit-policies\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.974388 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.978767 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"
Jan 21 14:33:20 crc kubenswrapper[4834]: I0121 14:33:20.995309 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.014758 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.034280 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.043525 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.053449 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.073088 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.079138 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/741cdc21-96df-4d88-a05e-d877ea76aa87-images\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.093027 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.100066 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/741cdc21-96df-4d88-a05e-d877ea76aa87-proxy-tls\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.113260 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.120921 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32b16913-b72e-4e5f-b684-913111a08bd7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.132567 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.170894 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcdd4\" (UniqueName: \"kubernetes.io/projected/5c8ce07a-fac7-43b4-88f3-4e43da0c75bb-kube-api-access-rcdd4\") pod \"apiserver-76f77b778f-hjnhd\" (UID: \"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb\") " pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.178394 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.188986 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wml69\" (UniqueName: \"kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69\") pod \"console-f9d7485db-vzwpb\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.216702 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls7r9\" (UniqueName: \"kubernetes.io/projected/5ef76e94-0cfd-430e-80cf-adb712ae9101-kube-api-access-ls7r9\") pod \"openshift-apiserver-operator-796bbdcf4f-zzhln\" (UID: \"5ef76e94-0cfd-430e-80cf-adb712ae9101\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.233906 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.243720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbwb\" (UniqueName: \"kubernetes.io/projected/0110ce00-4e75-42eb-ab42-75df43f68cbe-kube-api-access-6rbwb\") pod \"machine-approver-56656f9798-ktjs8\" (UID: \"0110ce00-4e75-42eb-ab42-75df43f68cbe\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.253266 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.273565 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.293837 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.313190 4834 request.go:700] Waited for 1.020931968s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.314904 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.321574 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-srv-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.332731 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.353701 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.362607 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-webhook-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.372877 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.373292 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff11e3b3-3893-41f2-a824-b04255f56040-apiservice-cert\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.381072 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3e0c585-701a-4ec9-a901-88877d73a876-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.407578 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.408844 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg84h\" (UniqueName: \"kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h\") pod \"route-controller-manager-6576b87f9c-qv272\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.413837 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.433352 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.453365 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.461725 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/57d9e464-9d27-44b1-bace-6e559dba046f-signing-key\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.466164 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.468024 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.473545 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 14:33:21 crc kubenswrapper[4834]: W0121 14:33:21.486107 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0110ce00_4e75_42eb_ab42_75df43f68cbe.slice/crio-455448838d622ebfa5ce03c9f7b95441eda46b5367e417fa7487234c170de5b8 WatchSource:0}: Error finding container 455448838d622ebfa5ce03c9f7b95441eda46b5367e417fa7487234c170de5b8: Status 404 returned error can't find the container with id 455448838d622ebfa5ce03c9f7b95441eda46b5367e417fa7487234c170de5b8
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.493495 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.497335 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/57d9e464-9d27-44b1-bace-6e559dba046f-signing-cabundle\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t"
Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.501191 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjnhd"]
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506411 4834 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506505 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert podName:9f95091b-8acb-473d-ac10-c27bcb7e856e nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006482398 +0000 UTC m=+147.980831433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert") pod "catalog-operator-68c6474976-wl7f4" (UID: "9f95091b-8acb-473d-ac10-c27bcb7e856e") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506507 4834 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506529 4834 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506592 4834 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506594 4834 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506554 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert podName:815061df-a412-4c5e-bb54-5e10e4d420f1 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.00654102 +0000 UTC m=+147.980890065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-4pbft" (UID: "815061df-a412-4c5e-bb54-5e10e4d420f1") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506531 4834 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506704 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume podName:98ce7851-62b5-4cb5-b7d4-2e03f1606cb0 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006676175 +0000 UTC m=+147.981025230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume") pod "collect-profiles-29483430-cdmzv" (UID: "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506727 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config podName:9d684d88-9233-4a40-b9ca-4c393f2d7939 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006718306 +0000 UTC m=+147.981067461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config") pod "kube-storage-version-migrator-operator-b67b599dd-v5kgf" (UID: "9d684d88-9233-4a40-b9ca-4c393f2d7939") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506767 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert podName:b9ebfb6e-89ec-4564-b488-08c666fb91af nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006758277 +0000 UTC m=+147.981107382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert") pod "service-ca-operator-777779d784-4r2rw" (UID: "b9ebfb6e-89ec-4564-b488-08c666fb91af") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506782 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token podName:f5bf163e-0c90-49ce-abd4-c39d628a6d09 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006774768 +0000 UTC m=+147.981123913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token") pod "machine-config-server-296l9" (UID: "f5bf163e-0c90-49ce-abd4-c39d628a6d09") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506853 4834 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.506884 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config podName:815061df-a412-4c5e-bb54-5e10e4d420f1 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.006877281 +0000 UTC m=+147.981226416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-4pbft" (UID: "815061df-a412-4c5e-bb54-5e10e4d420f1") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507177 4834 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507216 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config podName:b9ebfb6e-89ec-4564-b488-08c666fb91af nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007206902 +0000 UTC m=+147.981556027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config") pod "service-ca-operator-777779d784-4r2rw" (UID: "b9ebfb6e-89ec-4564-b488-08c666fb91af") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507262 4834 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507290 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert podName:9f95091b-8acb-473d-ac10-c27bcb7e856e nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007281814 +0000 UTC m=+147.981630939 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert") pod "catalog-operator-68c6474976-wl7f4" (UID: "9f95091b-8acb-473d-ac10-c27bcb7e856e") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507307 4834 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507359 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs podName:f5bf163e-0c90-49ce-abd4-c39d628a6d09 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007351836 +0000 UTC m=+147.981700971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs") pod "machine-config-server-296l9" (UID: "f5bf163e-0c90-49ce-abd4-c39d628a6d09") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507373 4834 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507424 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics podName:da7838db-42fa-496d-bdec-712d5fcc46c6 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007415749 +0000 UTC m=+147.981764894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics") pod "marketplace-operator-79b997595-v9x5s" (UID: "da7838db-42fa-496d-bdec-712d5fcc46c6") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507703 4834 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507713 4834 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507762 4834 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507790 4834 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507798 4834 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507739 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls podName:468c7958-916e-4d52-bd3b-d8eeeaa09172 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007730218 +0000 UTC m=+147.982079353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls") pod "dns-default-bk949" (UID: "468c7958-916e-4d52-bd3b-d8eeeaa09172") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507877 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume podName:98ce7851-62b5-4cb5-b7d4-2e03f1606cb0 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007867532 +0000 UTC m=+147.982216577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume") pod "collect-profiles-29483430-cdmzv" (UID: "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507894 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert podName:f164c103-b8ff-419d-bad2-6b17381ffee1 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007885893 +0000 UTC m=+147.982235038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert") pod "olm-operator-6b444d44fb-hxjcg" (UID: "f164c103-b8ff-419d-bad2-6b17381ffee1") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507915 4834 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507966 4834 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.507984 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert podName:a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.007901443 +0000 UTC m=+147.982250588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert") pod "ingress-canary-9v8ms" (UID: "a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9") : failed to sync secret cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.508030 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca podName:da7838db-42fa-496d-bdec-712d5fcc46c6 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.008022857 +0000 UTC m=+147.982371942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca") pod "marketplace-operator-79b997595-v9x5s" (UID: "da7838db-42fa-496d-bdec-712d5fcc46c6") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.508044 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume podName:468c7958-916e-4d52-bd3b-d8eeeaa09172 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.008037528 +0000 UTC m=+147.982386683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume") pod "dns-default-bk949" (UID: "468c7958-916e-4d52-bd3b-d8eeeaa09172") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.508058 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert podName:9d684d88-9233-4a40-b9ca-4c393f2d7939 nodeName:}" failed. No retries permitted until 2026-01-21 14:33:22.008051378 +0000 UTC m=+147.982400523 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-v5kgf" (UID: "9d684d88-9233-4a40-b9ca-4c393f2d7939") : failed to sync secret cache: timed out waiting for the condition Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.515638 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.544196 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.557259 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.587843 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.593523 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.614001 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.621962 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.632867 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.653800 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.655015 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.673263 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.694190 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.703425 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln"] Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.712841 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:33:21 crc kubenswrapper[4834]: W0121 14:33:21.729241 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef76e94_0cfd_430e_80cf_adb712ae9101.slice/crio-c3a828c8de412dd2fd298beadb57938d198d723a024e023e429ac7ff164c9599 WatchSource:0}: Error finding container 
c3a828c8de412dd2fd298beadb57938d198d723a024e023e429ac7ff164c9599: Status 404 returned error can't find the container with id c3a828c8de412dd2fd298beadb57938d198d723a024e023e429ac7ff164c9599 Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.732836 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.753327 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.789969 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdqkr\" (UniqueName: \"kubernetes.io/projected/51da62a2-0544-4231-8ab0-0b452ff8d2af-kube-api-access-bdqkr\") pod \"downloads-7954f5f757-5hf5s\" (UID: \"51da62a2-0544-4231-8ab0-0b452ff8d2af\") " pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.794320 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:33:21 crc kubenswrapper[4834]: W0121 14:33:21.805835 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f689c0c_55d1_4533_8447_b934821c0b0b.slice/crio-540f64f3858ec5003e9c692f01144b222c02378c68b63a1e7a684e0d4096bee5 WatchSource:0}: Error finding container 540f64f3858ec5003e9c692f01144b222c02378c68b63a1e7a684e0d4096bee5: Status 404 returned error can't find the container with id 540f64f3858ec5003e9c692f01144b222c02378c68b63a1e7a684e0d4096bee5 Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.807856 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jpwj\" (UniqueName: \"kubernetes.io/projected/7908fe78-158f-45d2-9e1e-2357a6f9cd42-kube-api-access-5jpwj\") pod \"cluster-samples-operator-665b6dd947-6l6w6\" (UID: \"7908fe78-158f-45d2-9e1e-2357a6f9cd42\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.829650 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cntvx\" (UniqueName: \"kubernetes.io/projected/afeaf40b-1ed4-464e-a550-60dce40a85f2-kube-api-access-cntvx\") pod \"authentication-operator-69f744f599-pcngd\" (UID: \"afeaf40b-1ed4-464e-a550-60dce40a85f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.832814 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.842503 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.853689 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.873425 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.893219 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.959008 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.959534 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:21 crc kubenswrapper[4834]: E0121 14:33:21.959613 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:35:23.959592268 +0000 UTC m=+269.933941323 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.959756 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.962465 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fs6\" (UniqueName: \"kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6\") pod \"oauth-openshift-558db77b4-zjflv\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.962546 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.963340 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbb5\" (UniqueName: \"kubernetes.io/projected/cd7c6174-21c8-4685-8c9f-3898b211fc35-kube-api-access-2pbb5\") pod \"console-operator-58897d9998-59dsk\" (UID: \"cd7c6174-21c8-4685-8c9f-3898b211fc35\") " pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.972849 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.977689 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74c8l\" (UniqueName: \"kubernetes.io/projected/be302830-09db-4bde-8611-08739d8dff31-kube-api-access-74c8l\") pod \"openshift-config-operator-7777fb866f-7thnr\" (UID: \"be302830-09db-4bde-8611-08739d8dff31\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.979876 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:21 crc kubenswrapper[4834]: I0121 14:33:21.980117 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.000906 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.013256 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.015244 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2w77\" (UniqueName: \"kubernetes.io/projected/168c6706-629a-4de3-9010-4a6ad7fb1f60-kube-api-access-v2w77\") pod \"machine-api-operator-5694c8668f-lgkd6\" (UID: \"168c6706-629a-4de3-9010-4a6ad7fb1f60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.038008 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.041253 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5hf5s"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.052655 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061095 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061152 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061216 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:22 
crc kubenswrapper[4834]: I0121 14:33:22.061296 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061324 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061357 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061389 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061409 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061430 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061508 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061535 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061558 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061615 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061645 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061664 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061749 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.061810 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815061df-a412-4c5e-bb54-5e10e4d420f1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.062986 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9ebfb6e-89ec-4564-b488-08c666fb91af-config\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.063030 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d684d88-9233-4a40-b9ca-4c393f2d7939-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.063644 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066014 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-srv-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066103 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f164c103-b8ff-419d-bad2-6b17381ffee1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066376 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d684d88-9233-4a40-b9ca-4c393f2d7939-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066506 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066540 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:22 crc kubenswrapper[4834]: 
I0121 14:33:22.066589 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.066997 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f95091b-8acb-473d-ac10-c27bcb7e856e-profile-collector-cert\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.067011 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.067591 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9ebfb6e-89ec-4564-b488-08c666fb91af-serving-cert\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.069558 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815061df-a412-4c5e-bb54-5e10e4d420f1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.072783 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.073115 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" event={"ID":"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb","Type":"ContainerStarted","Data":"2b1e69c6799e44d63c16388db72888bdd77fe17bdb281927099ce9ff692232ee"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.090281 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" event={"ID":"5ef76e94-0cfd-430e-80cf-adb712ae9101","Type":"ContainerStarted","Data":"c3a828c8de412dd2fd298beadb57938d198d723a024e023e429ac7ff164c9599"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.092083 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzwpb" event={"ID":"92411afe-95fe-481a-ac22-4a411f4ff7f3","Type":"ContainerStarted","Data":"e6859c11940879904507147f98a70bb90f082cdd43bad514a4091259c80f0ddd"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.092837 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" 
event={"ID":"5f689c0c-55d1-4533-8447-b934821c0b0b","Type":"ContainerStarted","Data":"540f64f3858ec5003e9c692f01144b222c02378c68b63a1e7a684e0d4096bee5"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.093648 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.093866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" event={"ID":"0110ce00-4e75-42eb-ab42-75df43f68cbe","Type":"ContainerStarted","Data":"455448838d622ebfa5ce03c9f7b95441eda46b5367e417fa7487234c170de5b8"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.094665 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5hf5s" event={"ID":"51da62a2-0544-4231-8ab0-0b452ff8d2af","Type":"ContainerStarted","Data":"4e79f869c161d593e0f13e115b76af9eb2db869e34e791eacd850f377cf9c359"} Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.097077 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.113286 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.124080 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.134229 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.145121 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/468c7958-916e-4d52-bd3b-d8eeeaa09172-metrics-tls\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.152486 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.153225 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.160207 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.173571 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.175553 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcngd"] Jan 21 14:33:22 crc kubenswrapper[4834]: W0121 14:33:22.182710 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafeaf40b_1ed4_464e_a550_60dce40a85f2.slice/crio-2cd386d0b02f79a43c67f8773d19be12bb4cb1aafb33dbf53c1265b3146ee113 WatchSource:0}: Error finding container 2cd386d0b02f79a43c67f8773d19be12bb4cb1aafb33dbf53c1265b3146ee113: Status 404 returned error can't find the container with id 2cd386d0b02f79a43c67f8773d19be12bb4cb1aafb33dbf53c1265b3146ee113 Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.183289 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/468c7958-916e-4d52-bd3b-d8eeeaa09172-config-volume\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.195317 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.205998 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-node-bootstrap-token\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.214365 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.232226 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.240726 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.247492 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.249754 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5bf163e-0c90-49ce-abd4-c39d628a6d09-certs\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.253244 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.263000 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.276311 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.296849 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.312989 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.316958 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-cert\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.319579 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.331151 4834 request.go:700] Waited for 1.968286438s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.334315 4834 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.356207 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.384668 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.447124 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqfk5\" (UniqueName: \"kubernetes.io/projected/5733e12b-1157-455a-a5fb-06f8bfde751f-kube-api-access-jqfk5\") pod \"router-default-5444994796-4sz2q\" (UID: \"5733e12b-1157-455a-a5fb-06f8bfde751f\") " pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.462606 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmf8d\" (UniqueName: \"kubernetes.io/projected/df1ea242-a6d3-430d-a33c-5e275f4855dd-kube-api-access-qmf8d\") pod \"etcd-operator-b45778765-xt48p\" (UID: \"df1ea242-a6d3-430d-a33c-5e275f4855dd\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.468446 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-59dsk"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.489086 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr729\" (UniqueName: \"kubernetes.io/projected/0fd88645-79b0-4d58-985d-75e35a14230e-kube-api-access-rr729\") pod \"openshift-controller-manager-operator-756b6f6bc6-6stcc\" (UID: \"0fd88645-79b0-4d58-985d-75e35a14230e\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.492784 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46g47\" (UniqueName: \"kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47\") pod \"controller-manager-879f6c89f-f4xlb\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.503839 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.509515 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.530434 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lfr\" (UniqueName: \"kubernetes.io/projected/0d4953e7-9186-4e62-a45c-fffa78bba767-kube-api-access-d6lfr\") pod \"ingress-operator-5b745b69d9-gsfns\" (UID: \"0d4953e7-9186-4e62-a45c-fffa78bba767\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.531196 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.538512 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.550842 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0f16257-5a68-4505-8a6b-0073cf2e1080-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lb7gq\" (UID: \"e0f16257-5a68-4505-8a6b-0073cf2e1080\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.566889 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.572942 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlwc\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-kube-api-access-5zlwc\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.587647 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzdrm\" (UniqueName: \"kubernetes.io/projected/62b731df-5898-4def-baa5-bc423a8c542b-kube-api-access-qzdrm\") pod \"dns-operator-744455d44c-gg52f\" (UID: \"62b731df-5898-4def-baa5-bc423a8c542b\") " pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.616573 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/428c9d34-0ff9-4979-8a9b-ae8171b73a20-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d2dk9\" (UID: \"428c9d34-0ff9-4979-8a9b-ae8171b73a20\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.653699 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsrwl\" (UniqueName: \"kubernetes.io/projected/9637e38c-b666-480c-a92a-71b40d1a41d0-kube-api-access-nsrwl\") pod \"csi-hostpathplugin-cdhjz\" (UID: \"9637e38c-b666-480c-a92a-71b40d1a41d0\") " pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.672829 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2m7l\" (UniqueName: \"kubernetes.io/projected/ff11e3b3-3893-41f2-a824-b04255f56040-kube-api-access-h2m7l\") pod \"packageserver-d55dfcdfc-9v4tg\" (UID: \"ff11e3b3-3893-41f2-a824-b04255f56040\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.677608 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbk4\" (UniqueName: \"kubernetes.io/projected/f5bf163e-0c90-49ce-abd4-c39d628a6d09-kube-api-access-msbk4\") pod \"machine-config-server-296l9\" (UID: \"f5bf163e-0c90-49ce-abd4-c39d628a6d09\") " pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.680852 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.701315 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815061df-a412-4c5e-bb54-5e10e4d420f1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4pbft\" (UID: \"815061df-a412-4c5e-bb54-5e10e4d420f1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.723716 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-296l9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.723851 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpstr\" (UniqueName: \"kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr\") pod \"collect-profiles-29483430-cdmzv\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.735024 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlr4t\" (UniqueName: \"kubernetes.io/projected/468c7958-916e-4d52-bd3b-d8eeeaa09172-kube-api-access-qlr4t\") pod \"dns-default-bk949\" (UID: \"468c7958-916e-4d52-bd3b-d8eeeaa09172\") " pod="openshift-dns/dns-default-bk949" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.746379 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.756873 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmjdb\" (UniqueName: \"kubernetes.io/projected/5b5315e3-10a5-4388-9e91-69e5e64bd718-kube-api-access-vmjdb\") pod \"migrator-59844c95c7-2dwbc\" (UID: \"5b5315e3-10a5-4388-9e91-69e5e64bd718\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.777964 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.782606 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.795056 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.805871 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45nc2\" (UniqueName: \"kubernetes.io/projected/ca0f8daf-3b69-4d64-9292-ad129dc27a0f-kube-api-access-45nc2\") pod \"apiserver-7bbb656c7d-bfxgg\" (UID: \"ca0f8daf-3b69-4d64-9292-ad129dc27a0f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.823293 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccctm\" (UniqueName: \"kubernetes.io/projected/f164c103-b8ff-419d-bad2-6b17381ffee1-kube-api-access-ccctm\") pod \"olm-operator-6b444d44fb-hxjcg\" (UID: \"f164c103-b8ff-419d-bad2-6b17381ffee1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.829652 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6s6k\" (UniqueName: \"kubernetes.io/projected/57d9e464-9d27-44b1-bace-6e559dba046f-kube-api-access-h6s6k\") pod \"service-ca-9c57cc56f-8sn2t\" (UID: \"57d9e464-9d27-44b1-bace-6e559dba046f\") " pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.836318 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lgkd6"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.843360 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zjflv"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.844648 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7thnr"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.850224 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltrqv\" (UniqueName: \"kubernetes.io/projected/32b16913-b72e-4e5f-b684-913111a08bd7-kube-api-access-ltrqv\") pod \"multus-admission-controller-857f4d67dd-fl4gd\" (UID: \"32b16913-b72e-4e5f-b684-913111a08bd7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.850697 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8khn\" (UniqueName: \"kubernetes.io/projected/d64b1f50-155f-44f0-b9ba-90e1e59fc1ce-kube-api-access-r8khn\") pod \"control-plane-machine-set-operator-78cbb6b69f-5596r\" (UID: \"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.853061 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.856074 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6"] Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.894294 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.895019 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcvxv\" (UniqueName: \"kubernetes.io/projected/deaf8010-1fc2-41b3-b94b-6339c4846f32-kube-api-access-pcvxv\") pod \"machine-config-controller-84d6567774-cwcqj\" (UID: \"deaf8010-1fc2-41b3-b94b-6339c4846f32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.899770 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxj8v\" (UniqueName: \"kubernetes.io/projected/a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9-kube-api-access-vxj8v\") pod \"ingress-canary-9v8ms\" (UID: \"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9\") " pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.908214 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.919708 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.920665 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhdfv\" (UniqueName: \"kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv\") pod \"marketplace-operator-79b997595-v9x5s\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.930361 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.938366 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.939037 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8hh\" (UniqueName: \"kubernetes.io/projected/9f95091b-8acb-473d-ac10-c27bcb7e856e-kube-api-access-ss8hh\") pod \"catalog-operator-68c6474976-wl7f4\" (UID: \"9f95091b-8acb-473d-ac10-c27bcb7e856e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.952431 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.953436 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz6h4\" (UniqueName: \"kubernetes.io/projected/741cdc21-96df-4d88-a05e-d877ea76aa87-kube-api-access-gz6h4\") pod \"machine-config-operator-74547568cd-g4j7q\" (UID: \"741cdc21-96df-4d88-a05e-d877ea76aa87\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.964285 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.969458 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.986082 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsw52\" (UniqueName: \"kubernetes.io/projected/a3e0c585-701a-4ec9-a901-88877d73a876-kube-api-access-bsw52\") pod \"package-server-manager-789f6589d5-7v775\" (UID: \"a3e0c585-701a-4ec9-a901-88877d73a876\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.988277 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:22 crc kubenswrapper[4834]: I0121 14:33:22.999351 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.008785 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.013368 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdt5\" (UniqueName: \"kubernetes.io/projected/9d684d88-9233-4a40-b9ca-4c393f2d7939-kube-api-access-5cdt5\") pod \"kube-storage-version-migrator-operator-b67b599dd-v5kgf\" (UID: \"9d684d88-9233-4a40-b9ca-4c393f2d7939\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:23 crc kubenswrapper[4834]: W0121 14:33:23.013626 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-77c3e14a9155b0ff4160d4840fc180be0adce6152f30034bd1e0780a3752d51c WatchSource:0}: Error finding container 77c3e14a9155b0ff4160d4840fc180be0adce6152f30034bd1e0780a3752d51c: Status 404 returned error can't find the container with id 77c3e14a9155b0ff4160d4840fc180be0adce6152f30034bd1e0780a3752d51c Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.015778 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hgkd\" (UniqueName: \"kubernetes.io/projected/b9ebfb6e-89ec-4564-b488-08c666fb91af-kube-api-access-9hgkd\") pod \"service-ca-operator-777779d784-4r2rw\" (UID: \"b9ebfb6e-89ec-4564-b488-08c666fb91af\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.021143 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bk949" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.042275 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9v8ms" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.055610 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11505903-40a1-4bd2-94c6-b8abbf15225e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-228c4\" (UID: \"11505903-40a1-4bd2-94c6-b8abbf15225e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.082666 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.082781 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.082901 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.083047 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.083468 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrml\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.083596 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.083720 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.083859 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.085825 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:23.585811243 +0000 UTC m=+149.560160278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.094178 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.097169 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.172808 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.178512 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.194012 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.194390 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.194679 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.194990 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.195006 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.195042 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.195123 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.195215 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrml\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.195988 4834 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:23.695966365 +0000 UTC m=+149.670315410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.199203 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.200638 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.236094 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.237400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" event={"ID":"0110ce00-4e75-42eb-ab42-75df43f68cbe","Type":"ContainerStarted","Data":"71375a6f2746e6beb513bd8af9394c04c5593cd8c85eb8e702f2870ed3287c1b"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.237439 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" event={"ID":"0110ce00-4e75-42eb-ab42-75df43f68cbe","Type":"ContainerStarted","Data":"5ca56b3c3a8ecf4c826362d832dee00c8e64c54325da4d5f31f52e6bc187532d"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.256309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.258724 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.261679 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.273840 4834 generic.go:334] "Generic (PLEG): container finished" podID="5c8ce07a-fac7-43b4-88f3-4e43da0c75bb" containerID="4b7c7cfc2df06ba40871da2a63646301d0eea7facdfbb6122950b5de69f8254f" exitCode=0 Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.273965 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" event={"ID":"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb","Type":"ContainerDied","Data":"4b7c7cfc2df06ba40871da2a63646301d0eea7facdfbb6122950b5de69f8254f"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.282662 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.291845 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.293117 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.326408 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.339703 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:23.839673324 +0000 UTC m=+149.814022539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.342428 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrml\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.345778 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzwpb" event={"ID":"92411afe-95fe-481a-ac22-4a411f4ff7f3","Type":"ContainerStarted","Data":"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.370331 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-296l9" event={"ID":"f5bf163e-0c90-49ce-abd4-c39d628a6d09","Type":"ContainerStarted","Data":"0bb983c394c2567a9c4fcf3db31acbbe4ad3566029494c7d0021cf1fd3c4ff13"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.370386 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-296l9" event={"ID":"f5bf163e-0c90-49ce-abd4-c39d628a6d09","Type":"ContainerStarted","Data":"72348b104cf92eaec2f282f966fb3176d3cafe535c695e37b09f7c421ccf3c87"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.381152 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" event={"ID":"2ab89550-989b-47f5-8877-aae1cb61fafd","Type":"ContainerStarted","Data":"f4b9e73085b053737010644f528461ff73d75f21ec272afe47855d0621e6d5ac"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.388187 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.390313 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5hf5s" event={"ID":"51da62a2-0544-4231-8ab0-0b452ff8d2af","Type":"ContainerStarted","Data":"668e8cc03ae248b80820d87a1ea1283c1aff4521d4b63fc24d34ff269e62654c"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.390391 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xt48p"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.390736 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.391512 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4sz2q" event={"ID":"5733e12b-1157-455a-a5fb-06f8bfde751f","Type":"ContainerStarted","Data":"d87521c193ab59eb0d2500523e4d101111bb78334b4ab9e7be2f9bdd0c972e1e"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.391640 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-4sz2q" event={"ID":"5733e12b-1157-455a-a5fb-06f8bfde751f","Type":"ContainerStarted","Data":"347cc852dcfc284a6d40646869da1a7383f64089f72516bf20a2fa4e65f32cf3"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.398155 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77c3e14a9155b0ff4160d4840fc180be0adce6152f30034bd1e0780a3752d51c"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.399977 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" event={"ID":"5f689c0c-55d1-4533-8447-b934821c0b0b","Type":"ContainerStarted","Data":"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.400071 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.412398 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.412469 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.418743 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" event={"ID":"5ef76e94-0cfd-430e-80cf-adb712ae9101","Type":"ContainerStarted","Data":"6312c5e6e0a726db81ba919adc0d1b48afa13f168cfdf55ac13830d743d927cb"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.429133 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.429685 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:23.92966944 +0000 UTC m=+149.904018485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.439234 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" event={"ID":"afeaf40b-1ed4-464e-a550-60dce40a85f2","Type":"ContainerStarted","Data":"ae7833df5f163e63a32701478dc6f8130373cf6d0495fc6a8e1f68dd45d21a41"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.439302 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" event={"ID":"afeaf40b-1ed4-464e-a550-60dce40a85f2","Type":"ContainerStarted","Data":"2cd386d0b02f79a43c67f8773d19be12bb4cb1aafb33dbf53c1265b3146ee113"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.459715 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-59dsk" event={"ID":"cd7c6174-21c8-4685-8c9f-3898b211fc35","Type":"ContainerStarted","Data":"e9077494f195cd44bb00d808f39c08046fcb6d34fa4b67135ba2ab1842cbee8c"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.459763 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-59dsk" event={"ID":"cd7c6174-21c8-4685-8c9f-3898b211fc35","Type":"ContainerStarted","Data":"fb72c4723ec2f18a01bf2b18196e17be74dc4ca9a15b18fa26caa4d1946837f7"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.459985 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.465604 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"11b92b7fac6098cb154a737909c3359fee687f07f0cafc599c00605f02f97a30"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.466970 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.469499 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" event={"ID":"168c6706-629a-4de3-9010-4a6ad7fb1f60","Type":"ContainerStarted","Data":"7fd67434aa38286840e62dfa7cd531142aac8f9db2ed6c8ff979942ad497cbf4"} Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.487755 4834 patch_prober.go:28] interesting pod/console-operator-58897d9998-59dsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.488208 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-59dsk" podUID="cd7c6174-21c8-4685-8c9f-3898b211fc35" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.531078 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.534340 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.034329096 +0000 UTC m=+150.008678141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.534620 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.577774 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:23 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:23 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:23 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.577824 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.636837 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.637233 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.137214485 +0000 UTC m=+150.111563530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: W0121 14:33:23.684336 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f16257_5a68_4505_8a6b_0073cf2e1080.slice/crio-9a1e06c9a5ec1e03b64bbb93b7333ab5466bb2e1a86c7e9597fe96a577610196 WatchSource:0}: Error finding container 9a1e06c9a5ec1e03b64bbb93b7333ab5466bb2e1a86c7e9597fe96a577610196: Status 404 returned error can't find the container with id 9a1e06c9a5ec1e03b64bbb93b7333ab5466bb2e1a86c7e9597fe96a577610196 Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.730560 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5hf5s" podStartSLOduration=128.730544959 podStartE2EDuration="2m8.730544959s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:23.730048303 +0000 UTC m=+149.704397338" watchObservedRunningTime="2026-01-21 14:33:23.730544959 +0000 UTC m=+149.704894004" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.751396 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.751755 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.251743988 +0000 UTC m=+150.226093033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.789568 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9"] Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.824915 4834 csr.go:261] certificate signing request csr-xnm6r is approved, waiting to be issued Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.847351 4834 csr.go:257] certificate signing request csr-xnm6r is issued Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.848703 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.853539 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.854992 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.354975968 +0000 UTC m=+150.329325013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.865491 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4sz2q" podStartSLOduration=128.865466164 podStartE2EDuration="2m8.865466164s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:23.863051237 +0000 UTC m=+149.837400282" watchObservedRunningTime="2026-01-21 14:33:23.865466164 +0000 UTC m=+149.839815209" Jan 21 14:33:23 crc kubenswrapper[4834]: I0121 14:33:23.958518 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:23 crc kubenswrapper[4834]: E0121 14:33:23.958913 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.458898351 +0000 UTC m=+150.433247396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.062556 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cdhjz"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.062833 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.068712 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.069012 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.568996832 +0000 UTC m=+150.543345877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.071054 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.161092 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vzwpb" podStartSLOduration=130.161062784 podStartE2EDuration="2m10.161062784s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.08607248 +0000 UTC m=+150.060421525" watchObservedRunningTime="2026-01-21 14:33:24.161062784 +0000 UTC m=+150.135411839" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.170967 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.171280 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.671266061 +0000 UTC m=+150.645615106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.273863 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.274520 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.77447174 +0000 UTC m=+150.748820785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.274597 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.275019 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.775004058 +0000 UTC m=+150.749353103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.287782 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-296l9" podStartSLOduration=4.2877637459999995 podStartE2EDuration="4.287763746s" podCreationTimestamp="2026-01-21 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.28536817 +0000 UTC m=+150.259717215" watchObservedRunningTime="2026-01-21 14:33:24.287763746 +0000 UTC m=+150.262112791" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.371486 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ktjs8" podStartSLOduration=130.371468861 podStartE2EDuration="2m10.371468861s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.338659069 +0000 UTC m=+150.313008114" watchObservedRunningTime="2026-01-21 14:33:24.371468861 +0000 UTC m=+150.345817906" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.375639 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.381326 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.881300606 +0000 UTC m=+150.855649741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.440895 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" podStartSLOduration=129.440878217 podStartE2EDuration="2m9.440878217s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.439914526 +0000 UTC m=+150.414263571" watchObservedRunningTime="2026-01-21 14:33:24.440878217 +0000 UTC m=+150.415227262" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.476704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.477237 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:24.977221222 +0000 UTC m=+150.951570267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.484731 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-59dsk" podStartSLOduration=130.484714812 podStartE2EDuration="2m10.484714812s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.481755747 +0000 UTC m=+150.456104792" watchObservedRunningTime="2026-01-21 14:33:24.484714812 +0000 UTC m=+150.459063857" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.486843 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" event={"ID":"428c9d34-0ff9-4979-8a9b-ae8171b73a20","Type":"ContainerStarted","Data":"39bde1b81e34f208b1428fcf40f3f78b7ffebb84301ad88738b475600062977e"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.494116 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" event={"ID":"ff11e3b3-3893-41f2-a824-b04255f56040","Type":"ContainerStarted","Data":"b8693327c1718af94a1277f8c381f9d1bd6282feecd7e797b6d48f546ab4f984"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.495564 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" event={"ID":"0094f50f-57ac-4cb5-a536-81bf5fc7ae90","Type":"ContainerStarted","Data":"bd512f43b04ad5332a64267215693ca10e7ae0c03d2c127d2b4220909867e2cd"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.497606 4834 generic.go:334] "Generic (PLEG): container finished" podID="be302830-09db-4bde-8611-08739d8dff31" containerID="fa8b9e43f609c76d4a80080038490fa1f067bd4dc1724b6aeb8e8726d04ae5fe" exitCode=0 Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.497693 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" event={"ID":"be302830-09db-4bde-8611-08739d8dff31","Type":"ContainerDied","Data":"fa8b9e43f609c76d4a80080038490fa1f067bd4dc1724b6aeb8e8726d04ae5fe"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.497741 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" event={"ID":"be302830-09db-4bde-8611-08739d8dff31","Type":"ContainerStarted","Data":"73aac56fb14eee2b745ba28051460f4668ec6cec2f67a2ea280027c6dcc33d84"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.503365 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" event={"ID":"9637e38c-b666-480c-a92a-71b40d1a41d0","Type":"ContainerStarted","Data":"e536dd289de7a45e1232430d061810798b94beae6409929183b6ba3b6bd0b9b6"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.520404 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" 
event={"ID":"df1ea242-a6d3-430d-a33c-5e275f4855dd","Type":"ContainerStarted","Data":"9be6e88164201f85a5ed1da69fd77def439ad2eaefbfd34daf5be9e69b9d1a33"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.522254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" event={"ID":"0fd88645-79b0-4d58-985d-75e35a14230e","Type":"ContainerStarted","Data":"d82707ab94bc7649f1d54b56eaf53eda83b1ade0ac959c89710081f6c8c8a852"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.523203 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" event={"ID":"0d4953e7-9186-4e62-a45c-fffa78bba767","Type":"ContainerStarted","Data":"305b24661da21e4d491b52bb54254838d2d95e6b4344ec46d39a3ce105fcb2c7"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.529153 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" event={"ID":"7908fe78-158f-45d2-9e1e-2357a6f9cd42","Type":"ContainerStarted","Data":"507b2ed55b774ffec914ff0548225c03de6530f61d4f781369e3923cea0ee25b"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.536363 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b878846cad95c7037cf78dc89418f5f9148c74a81e8d17b07e807fe74d2a1226"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.558134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" event={"ID":"e0f16257-5a68-4505-8a6b-0073cf2e1080","Type":"ContainerStarted","Data":"9a1e06c9a5ec1e03b64bbb93b7333ab5466bb2e1a86c7e9597fe96a577610196"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.565645 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c299234fa4322e99add789eb5936fa95acf4daa6d4d9ff875a0914a98589664"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.568282 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" event={"ID":"5b5315e3-10a5-4388-9e91-69e5e64bd718","Type":"ContainerStarted","Data":"bd08119738efe7d89d7cbd1a81317f35d45b02a04af9d431a66ee18dd20f57f9"} Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.569047 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.569099 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.584520 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.585601 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.085585467 +0000 UTC m=+151.059934512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.595186 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:24 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:24 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:24 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.595254 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.615616 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcngd" podStartSLOduration=130.615591709 podStartE2EDuration="2m10.615591709s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.613921876 +0000 UTC m=+150.588270941" watchObservedRunningTime="2026-01-21 14:33:24.615591709 +0000 UTC m=+150.589940754" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.658282 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-59dsk" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.692646 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.694377 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.194364035 +0000 UTC m=+151.168713080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.794188 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.294167476 +0000 UTC m=+151.268516521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.794213 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.794518 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.794795 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.294785975 +0000 UTC m=+151.269135030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.853454 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gg52f"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.861465 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.872139 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 14:28:23 +0000 UTC, rotation deadline is 2026-10-28 17:05:29.538710325 +0000 UTC Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.872194 4834 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6722h32m4.666518737s for next certificate rotation Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.878848 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.879690 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zzhln" podStartSLOduration=130.879668468 podStartE2EDuration="2m10.879668468s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:24.872433526 +0000 UTC m=+150.846782571" watchObservedRunningTime="2026-01-21 14:33:24.879668468 +0000 UTC m=+150.854017523" Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.897058 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.897611 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.397593303 +0000 UTC m=+151.371942348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.901004 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.953368 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.956291 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg"] Jan 21 14:33:24 crc kubenswrapper[4834]: I0121 14:33:24.998391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:24 crc kubenswrapper[4834]: E0121 14:33:24.998717 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.498703725 +0000 UTC m=+151.473052770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.002701 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bk949"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.100068 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.100290 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.600255791 +0000 UTC m=+151.574604846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.100854 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.101328 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.601309035 +0000 UTC m=+151.575658260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.156605 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.195013 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.201734 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.202260 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.702243382 +0000 UTC m=+151.676592437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.205714 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775"] Jan 21 14:33:25 crc kubenswrapper[4834]: W0121 14:33:25.244297 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd64b1f50_155f_44f0_b9ba_90e1e59fc1ce.slice/crio-b9dd852b5b6a9b7b5805794e2344c21e8eb435b0bc2f771e410ad29b8f7a1a36 WatchSource:0}: Error finding container b9dd852b5b6a9b7b5805794e2344c21e8eb435b0bc2f771e410ad29b8f7a1a36: Status 404 returned error can't find the container with id b9dd852b5b6a9b7b5805794e2344c21e8eb435b0bc2f771e410ad29b8f7a1a36 Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.250056 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fl4gd"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.262016 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.306245 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.306513 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.806501726 +0000 UTC m=+151.780850771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: W0121 14:33:25.344152 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b16913_b72e_4e5f_b684_913111a08bd7.slice/crio-8df5a86dfe8df6802581b18817f7a6a45abe26fb535d4c4466c1bd4ccd48b1e0 WatchSource:0}: Error finding container 8df5a86dfe8df6802581b18817f7a6a45abe26fb535d4c4466c1bd4ccd48b1e0: Status 404 returned error can't find the container with id 8df5a86dfe8df6802581b18817f7a6a45abe26fb535d4c4466c1bd4ccd48b1e0 Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.363011 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.412077 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.412997 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.413056 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.913040612 +0000 UTC m=+151.887389657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.413340 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.413574 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:25.913566948 +0000 UTC m=+151.887915993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.418818 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.436681 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9v8ms"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.477243 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj"] Jan 21 14:33:25 crc kubenswrapper[4834]: W0121 14:33:25.488068 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d684d88_9233_4a40_b9ca_4c393f2d7939.slice/crio-5a8ad64aa527f65793df16249307501537c2f92d97aa14c2e55fe4ef1dbcafee WatchSource:0}: Error finding container 5a8ad64aa527f65793df16249307501537c2f92d97aa14c2e55fe4ef1dbcafee: Status 404 returned error can't find the container with id 5a8ad64aa527f65793df16249307501537c2f92d97aa14c2e55fe4ef1dbcafee Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.501832 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8sn2t"] Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.514956 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.515246 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.015232389 +0000 UTC m=+151.989581434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.537694 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:25 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:25 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:25 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.537737 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.578304 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" event={"ID":"62b731df-5898-4def-baa5-bc423a8c542b","Type":"ContainerStarted","Data":"c56b57f1184a714f67aa1c6b57cf5bba8aa7e14a116ff09d26993654315fc862"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.581445 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" event={"ID":"0d4953e7-9186-4e62-a45c-fffa78bba767","Type":"ContainerStarted","Data":"e633002fca27118c91adfddfdbfc738797e6fa36f2999979999d6e46d5579afa"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.584082 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" event={"ID":"da7838db-42fa-496d-bdec-712d5fcc46c6","Type":"ContainerStarted","Data":"a81f439c4c5265af4317c4bb0ffbbda06c51577dce4b9a2389ceb105701585bf"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.616064 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.616867 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.116856978 +0000 UTC m=+152.091206013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.632533 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" event={"ID":"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce","Type":"ContainerStarted","Data":"b9dd852b5b6a9b7b5805794e2344c21e8eb435b0bc2f771e410ad29b8f7a1a36"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.664620 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" event={"ID":"9f95091b-8acb-473d-ac10-c27bcb7e856e","Type":"ContainerStarted","Data":"c1e375205d6b87d66f565c7e4f0b98df8d96c3fa3bf80642ceb3b1728d7d70a2"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.713299 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" event={"ID":"ff11e3b3-3893-41f2-a824-b04255f56040","Type":"ContainerStarted","Data":"f58290d0a637e09ab72c4adde0ad4a55101e387b1c809ac3fb27d0c714c10579"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.739200 4834 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9v4tg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.739704 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" podUID="ff11e3b3-3893-41f2-a824-b04255f56040" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.739989 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.742315 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.742667 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.242650111 +0000 UTC m=+152.216999146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.752738 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" event={"ID":"11505903-40a1-4bd2-94c6-b8abbf15225e","Type":"ContainerStarted","Data":"92ec670e2cc1b117ca8cc1dac71860bcf6993c2bd8aae8287d801e7ec5598eaa"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.770001 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" podStartSLOduration=130.769981748 podStartE2EDuration="2m10.769981748s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:25.767664284 +0000 UTC m=+151.742013329" watchObservedRunningTime="2026-01-21 14:33:25.769981748 +0000 UTC m=+151.744330793" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.836201 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" event={"ID":"2ab89550-989b-47f5-8877-aae1cb61fafd","Type":"ContainerStarted","Data":"892fe8086803960c04c15f07066eb5aac6023ef8ddec9da772f711fddb452b1b"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.836740 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.837740 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" podStartSLOduration=130.837702369 podStartE2EDuration="2m10.837702369s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:25.835991934 +0000 UTC m=+151.810340999" watchObservedRunningTime="2026-01-21 14:33:25.837702369 +0000 UTC m=+151.812051404" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.844228 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.845426 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.345407276 +0000 UTC m=+152.319756391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.846077 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bb829fa93c231b5e1ae673bc7377badb8ffcf01a2f25077bab7da3279adea589"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.858461 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" event={"ID":"b9ebfb6e-89ec-4564-b488-08c666fb91af","Type":"ContainerStarted","Data":"28c6e188d83c4869d6050324dfcd965704112f6aaadcf8478ccb16af444133e9"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.864269 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" event={"ID":"a3e0c585-701a-4ec9-a901-88877d73a876","Type":"ContainerStarted","Data":"68bc4a6cefd0478f822795eb2173a8f6b30702ae7b76de455ca72d440c5d4184"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.874737 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" event={"ID":"9d684d88-9233-4a40-b9ca-4c393f2d7939","Type":"ContainerStarted","Data":"5a8ad64aa527f65793df16249307501537c2f92d97aa14c2e55fe4ef1dbcafee"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.884039 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" podStartSLOduration=131.884017714 podStartE2EDuration="2m11.884017714s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:25.865100308 +0000 UTC m=+151.839449353" watchObservedRunningTime="2026-01-21 14:33:25.884017714 +0000 UTC m=+151.858366759" Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.903092 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" event={"ID":"5b5315e3-10a5-4388-9e91-69e5e64bd718","Type":"ContainerStarted","Data":"37e4700a771de1ec2c86bcefe78805cee1920f781330c9aec4967c135f6583b4"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.909257 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bk949" event={"ID":"468c7958-916e-4d52-bd3b-d8eeeaa09172","Type":"ContainerStarted","Data":"6df6c4f25fd423afc25133805f9993ec9949e511eecc3f869cf72e50e1b0bf2a"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.930633 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9v8ms" event={"ID":"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9","Type":"ContainerStarted","Data":"10f718bb4dfccd6350ef959c1c482202d0e173ca00dc561606180861dcc1ac35"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.934118 4834 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" event={"ID":"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0","Type":"ContainerStarted","Data":"6a37c69ed35765442c8056abd30ee87a1e631972922af867ea5aa54d5c423c94"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.945043 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:25 crc kubenswrapper[4834]: E0121 14:33:25.947364 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.447341666 +0000 UTC m=+152.421690711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.975553 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" event={"ID":"815061df-a412-4c5e-bb54-5e10e4d420f1","Type":"ContainerStarted","Data":"a766d2b3cbdf80aa9934f9663b5dd9089f9665546f8e67b9ed54367eaa625db6"} Jan 21 14:33:25 crc kubenswrapper[4834]: I0121 14:33:25.992258 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" podStartSLOduration=130.992241195 podStartE2EDuration="2m10.992241195s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:25.989983002 +0000 UTC m=+151.964332057" watchObservedRunningTime="2026-01-21 14:33:25.992241195 +0000 UTC m=+151.966590240" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.000019 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" podStartSLOduration=131.992923706 podStartE2EDuration="2m11.992923706s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:25.967889764 +0000 UTC m=+151.942238809" watchObservedRunningTime="2026-01-21 14:33:25.992923706 +0000 UTC m=+151.967272761" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.001021 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d5f5e9c9a66eb9597d680d9005075555138f4c29b86f42946983cf1d3e5be8f0"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.001674 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.047104 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.049216 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.549204441 +0000 UTC m=+152.523553486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.064075 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" event={"ID":"168c6706-629a-4de3-9010-4a6ad7fb1f60","Type":"ContainerStarted","Data":"c5918a6c129020d88734c8fab37960ec4fea85c210155ffabb3325a45d6dd22d"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.079115 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" event={"ID":"741cdc21-96df-4d88-a05e-d877ea76aa87","Type":"ContainerStarted","Data":"3117fa69f5159ca6ebda436dcd7245ad1620df4edc1c5b5d72b5fbd3e34b46ae"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.080347 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" event={"ID":"ca0f8daf-3b69-4d64-9292-ad129dc27a0f","Type":"ContainerStarted","Data":"51ad70e5cdc21f8f3919ba811a2fcca81e7bbf118b57d505216803b664a11d7c"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.089534 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" event={"ID":"f164c103-b8ff-419d-bad2-6b17381ffee1","Type":"ContainerStarted","Data":"1bb87a1cdfd53b712e72c8734ba1b0f4187a505cfd81027f0b459a729ae5bdb8"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.090848 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.107108 4834 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxjcg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.107164 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" podUID="f164c103-b8ff-419d-bad2-6b17381ffee1" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.109583 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" podStartSLOduration=131.109567988 podStartE2EDuration="2m11.109567988s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:26.108971658 +0000 UTC m=+152.083320703" watchObservedRunningTime="2026-01-21 14:33:26.109567988 +0000 UTC m=+152.083917033" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.125724 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" event={"ID":"32b16913-b72e-4e5f-b684-913111a08bd7","Type":"ContainerStarted","Data":"8df5a86dfe8df6802581b18817f7a6a45abe26fb535d4c4466c1bd4ccd48b1e0"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.135627 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" event={"ID":"0094f50f-57ac-4cb5-a536-81bf5fc7ae90","Type":"ContainerStarted","Data":"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7"} Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.136557 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.150085 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.150215 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.65019226 +0000 UTC m=+152.624541305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.150476 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.150723 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:33:26.650715367 +0000 UTC m=+152.625064412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.151353 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" podStartSLOduration=131.151339617 podStartE2EDuration="2m11.151339617s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:26.149689174 +0000 UTC m=+152.124038219" watchObservedRunningTime="2026-01-21 14:33:26.151339617 +0000 UTC m=+152.125688672" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.159270 4834 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-f4xlb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.159321 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.208349 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" podStartSLOduration=131.208333804 podStartE2EDuration="2m11.208333804s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:26.207275901 +0000 UTC m=+152.181624946" watchObservedRunningTime="2026-01-21 14:33:26.208333804 +0000 UTC m=+152.182682839" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.208469 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" podStartSLOduration=131.208464628 podStartE2EDuration="2m11.208464628s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:26.179621014 +0000 UTC m=+152.153970059" watchObservedRunningTime="2026-01-21 14:33:26.208464628 +0000 UTC m=+152.182813673" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.251222 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:26 crc kubenswrapper[4834]: 
E0121 14:33:26.252301 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.752284754 +0000 UTC m=+152.726633799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.279128 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.384049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.384372 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.88435754 +0000 UTC m=+152.858706585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.486026 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.487757 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:26.987741265 +0000 UTC m=+152.962090310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.540633 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:26 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:26 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:26 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.540963 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.589064 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.589392 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.089380753 +0000 UTC m=+153.063729798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.694100 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.694252 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.194205225 +0000 UTC m=+153.168554270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.694578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.694873 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.194856246 +0000 UTC m=+153.169205291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.795239 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.795942 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.295911787 +0000 UTC m=+153.270260822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:26 crc kubenswrapper[4834]: I0121 14:33:26.897034 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:26 crc kubenswrapper[4834]: E0121 14:33:26.897463 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.397449782 +0000 UTC m=+153.371798827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.000423 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.000725 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.500709034 +0000 UTC m=+153.475058089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.101561 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.102181 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.602169007 +0000 UTC m=+153.576518052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.202312 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.202602 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.702589007 +0000 UTC m=+153.676938052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.202859 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6stcc" event={"ID":"0fd88645-79b0-4d58-985d-75e35a14230e","Type":"ContainerStarted","Data":"e3df537299d0897cc418364dcb1f242c2f7ab5344953501d6045b24cfc5e360f"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.215186 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" event={"ID":"815061df-a412-4c5e-bb54-5e10e4d420f1","Type":"ContainerStarted","Data":"e376cf4917fce2499492659b4336cad811c0a108318218c45d2e85bf4ec12bd6"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.224303 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bk949" event={"ID":"468c7958-916e-4d52-bd3b-d8eeeaa09172","Type":"ContainerStarted","Data":"1788cea723f4a0bde659121db999698e92f5e59dc1e5ff42a7762eec128f522e"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.233304 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" event={"ID":"e0f16257-5a68-4505-8a6b-0073cf2e1080","Type":"ContainerStarted","Data":"b388fb8617e5e4e3460e31317066fcbe151c525c00bb3daa4fdd5ebe813663b2"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.240258 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xt48p" event={"ID":"df1ea242-a6d3-430d-a33c-5e275f4855dd","Type":"ContainerStarted","Data":"653b06c32b5f66a462e65c01732eb0742ebc241d2ae0a5fee35c9e1e38650413"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.256196 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4pbft" podStartSLOduration=132.256180006 podStartE2EDuration="2m12.256180006s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.249163532 +0000 UTC m=+153.223512577" watchObservedRunningTime="2026-01-21 14:33:27.256180006 +0000 UTC m=+153.230529051" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.270026 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" event={"ID":"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0","Type":"ContainerStarted","Data":"06ff54c098b9a4b91dcba9432ccda9554b0ca2ff3172e1fd2c288224300e2693"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.292917 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lgkd6" event={"ID":"168c6706-629a-4de3-9010-4a6ad7fb1f60","Type":"ContainerStarted","Data":"db01c8a67dfe97511ffb7d2d245399e545413df811dea1ebe77ed9c627f3de75"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.317189 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.318123 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.818109522 +0000 UTC m=+153.792458567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.338027 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" event={"ID":"b9ebfb6e-89ec-4564-b488-08c666fb91af","Type":"ContainerStarted","Data":"6147348ad017310782a5f112247e86128b68f23c1e7edaa08a45556a055fa7ec"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.387456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" event={"ID":"deaf8010-1fc2-41b3-b94b-6339c4846f32","Type":"ContainerStarted","Data":"81e516da55a1f8a4947afc7bdf1f26930d5493fa2f08640071128fc8729f4ad1"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.387519 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" event={"ID":"deaf8010-1fc2-41b3-b94b-6339c4846f32","Type":"ContainerStarted","Data":"b83d6bdfe9d0de2460c631c1ca9805c4ed7caaf5deaea19b6b828dce16755c5b"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.420181 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" event={"ID":"5b5315e3-10a5-4388-9e91-69e5e64bd718","Type":"ContainerStarted","Data":"f5babe0ecde247b15e729a73bc9350c74b34296549c1f844ecc4d382ec2edcc1"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.421263 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.422550 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:27.922530301 +0000 UTC m=+153.896879356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.453044 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lb7gq" podStartSLOduration=132.453027339 podStartE2EDuration="2m12.453027339s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.293502953 +0000 UTC m=+153.267851998" watchObservedRunningTime="2026-01-21 14:33:27.453027339 +0000 UTC m=+153.427376384" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.453241 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4r2rw" podStartSLOduration=132.453237276 podStartE2EDuration="2m12.453237276s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.452273754 +0000 UTC m=+153.426622799" watchObservedRunningTime="2026-01-21 14:33:27.453237276 +0000 UTC m=+153.427586321" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.500856 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" event={"ID":"d64b1f50-155f-44f0-b9ba-90e1e59fc1ce","Type":"ContainerStarted","Data":"bb9a05fb77e8235f464dc8255843b19ae9d25e224735ec7edc6091a5de5706b3"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.522739 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.523092 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.023079905 +0000 UTC m=+153.997428950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.537986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9v8ms" event={"ID":"a02eb7a9-1d91-4b89-8b19-6dc75f83e2d9","Type":"ContainerStarted","Data":"56b4c00343dd67f30098eecdf4d0c514d36e77e490163f1259548bafaf2e6bbb"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.550467 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2dwbc" podStartSLOduration=132.550452813 podStartE2EDuration="2m12.550452813s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.548758489 +0000 UTC m=+153.523107544" watchObservedRunningTime="2026-01-21 14:33:27.550452813 +0000 UTC m=+153.524801858" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.551853 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:27 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:27 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:27 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.551896 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.577757 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" event={"ID":"7908fe78-158f-45d2-9e1e-2357a6f9cd42","Type":"ContainerStarted","Data":"8f41dd0a2bd7f32df89722d832097338f03350a2af914bfe5583aa831300107c"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.577804 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" event={"ID":"7908fe78-158f-45d2-9e1e-2357a6f9cd42","Type":"ContainerStarted","Data":"dc0d919194ac45d20d7c0a966ec5bbf7f07ac5d9101f707442c53cd71082adac"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.606984 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" event={"ID":"9f95091b-8acb-473d-ac10-c27bcb7e856e","Type":"ContainerStarted","Data":"2720d16fc0fbb851f424d08500dd674b20f6be46fac873a4ad1321142037dbe0"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.608021 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.625480 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.626035 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.126020946 +0000 UTC m=+154.100369991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.627363 4834 generic.go:334] "Generic (PLEG): container finished" podID="ca0f8daf-3b69-4d64-9292-ad129dc27a0f" containerID="330a86d3c16e6f2e46285867db313c7fbc75ffabb0985defc587991402e3083e" exitCode=0 Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.627436 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" event={"ID":"ca0f8daf-3b69-4d64-9292-ad129dc27a0f","Type":"ContainerDied","Data":"330a86d3c16e6f2e46285867db313c7fbc75ffabb0985defc587991402e3083e"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.633032 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.652254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" event={"ID":"9637e38c-b666-480c-a92a-71b40d1a41d0","Type":"ContainerStarted","Data":"f109f617afc962e8a0e87ccee3dc463056da1e8d7c22d7666bdaf5e2d4fa9602"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.654432 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" event={"ID":"32b16913-b72e-4e5f-b684-913111a08bd7","Type":"ContainerStarted","Data":"c4ca5555f7dbe4c98b90531c8129fe529008dd9f4fcec2f25dbaa46ee21610c0"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.660889 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9v8ms" podStartSLOduration=7.660871214 podStartE2EDuration="7.660871214s" podCreationTimestamp="2026-01-21 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.658719984 +0000 UTC m=+153.633069029" watchObservedRunningTime="2026-01-21 14:33:27.660871214 +0000 UTC m=+153.635220259" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.684125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d2dk9" event={"ID":"428c9d34-0ff9-4979-8a9b-ae8171b73a20","Type":"ContainerStarted","Data":"e7f98e29bb102bb2590d55770d518b5a70be1ff3324a011675f492ba66d1388c"} 
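[editor's note] The resolution is already in flight in the events above: the hostpath-provisioner/csi-hostpathplugin-cdhjz pod has containers starting. Once that node plugin's registrar announces the driver over the kubelet's plugin-registration socket (typically under /var/lib/kubelet/plugins_registry/), kubevirt.io.hostpath-provisioner appears in the registered-driver list and the mount/unmount loop succeeds on the next retry. Registration state is visible in the node's CSINode object; below is a minimal client-go sketch that lists it. The node name "crc" comes from this log; the kubeconfig path is an assumption.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// assumed kubeconfig location
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The CSINode object mirrors what the kubelet's plugin manager has registered.
    	n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, d := range n.Spec.Drivers {
    		// kubevirt.io.hostpath-provisioner should show up here once csi-hostpathplugin is running
    		fmt.Println("registered:", d.Name)
    	}
    }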
Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.704782 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" event={"ID":"9d684d88-9233-4a40-b9ca-4c393f2d7939","Type":"ContainerStarted","Data":"428d2890c4c4c09575b11a8f7abb44d447e7a940c18c26f3d11b77eea299ffdc"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.731336 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.732658 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.232640205 +0000 UTC m=+154.206989250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.733514 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" event={"ID":"741cdc21-96df-4d88-a05e-d877ea76aa87","Type":"ContainerStarted","Data":"5b23ba6dcb559b9e2ed9c5428e6a366ed81d5b14a45d3fb44356fb3e19937216"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.793284 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5596r" podStartSLOduration=132.793265009 podStartE2EDuration="2m12.793265009s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.780138858 +0000 UTC m=+153.754487913" watchObservedRunningTime="2026-01-21 14:33:27.793265009 +0000 UTC m=+153.767614054" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.811467 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" event={"ID":"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb","Type":"ContainerStarted","Data":"84f7647375ef9f244303811bc2dcd458309e874cb5d7953043006ab5cba09241"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.811505 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" event={"ID":"5c8ce07a-fac7-43b4-88f3-4e43da0c75bb","Type":"ContainerStarted","Data":"43072713da91599efd8cd27f2ba5e91f9a858d804cdb5251c167c281fde45bbd"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.836092 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.837134 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.337120856 +0000 UTC m=+154.311469891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.851098 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" event={"ID":"62b731df-5898-4def-baa5-bc423a8c542b","Type":"ContainerStarted","Data":"8beb4dbb7efd736088a7cb5726e20df4f57767321738ad890cb174ec0aff23ff"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.878185 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6l6w6" podStartSLOduration=133.878167012 podStartE2EDuration="2m13.878167012s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.8771948 +0000 UTC m=+153.851543865" watchObservedRunningTime="2026-01-21 14:33:27.878167012 +0000 UTC m=+153.852516047" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.886189 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" event={"ID":"57d9e464-9d27-44b1-bace-6e559dba046f","Type":"ContainerStarted","Data":"d0bf34a8be435e414f4c2aa5aaea1865122499ff8a82db78fa914d84fc1de1a3"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.886233 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" event={"ID":"57d9e464-9d27-44b1-bace-6e559dba046f","Type":"ContainerStarted","Data":"d5ffd70f9bce9a4f49c43a3cc7c58d3ba2bacdd64dbcf87c84a1fe33614d8484"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.938383 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:27 crc kubenswrapper[4834]: E0121 14:33:27.939904 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.439890941 +0000 UTC m=+154.414239986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.941728 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" event={"ID":"a3e0c585-701a-4ec9-a901-88877d73a876","Type":"ContainerStarted","Data":"7cab01f1499ef8126e48eef00ed31a8bf4dfb01172db792cdc6a8fc449c076cd"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.941764 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" event={"ID":"a3e0c585-701a-4ec9-a901-88877d73a876","Type":"ContainerStarted","Data":"79098bf4daabdf7b2399234aae4d906a89b6fe0fb5aa3d42ad10c4589ff78af1"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.942250 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.970066 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" event={"ID":"be302830-09db-4bde-8611-08739d8dff31","Type":"ContainerStarted","Data":"1b53b3f05f12eaefc65a9ce2ee6d0b771bed78c716d774c337df766bfc093c6f"} Jan 21 14:33:27 crc kubenswrapper[4834]: I0121 14:33:27.970107 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.007255 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" event={"ID":"da7838db-42fa-496d-bdec-712d5fcc46c6","Type":"ContainerStarted","Data":"9304d7afc79f0e1878eeb5d03d2d81740852dbd4b9bb0f68647536a0c791773c"} Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.008261 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.025126 4834 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v9x5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.025184 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.025610 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" podStartSLOduration=133.025598729 podStartE2EDuration="2m13.025598729s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:27.965012057 +0000 UTC m=+153.939361102" watchObservedRunningTime="2026-01-21 14:33:28.025598729 +0000 UTC m=+153.999947774" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.044902 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.046170 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.546149219 +0000 UTC m=+154.520498264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.080920 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" event={"ID":"f164c103-b8ff-419d-bad2-6b17381ffee1","Type":"ContainerStarted","Data":"42236b3213db8b764ed01af351ed8b46412667a40466b06d2b7fbada3e21df16"} Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.129716 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" event={"ID":"0d4953e7-9186-4e62-a45c-fffa78bba767","Type":"ContainerStarted","Data":"bd3075f6423f66aa16dc44547f769e0c5d61af29c73c3993b9341fa67dce66d9"} Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.133958 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-wl7f4" podStartSLOduration=133.133941304 podStartE2EDuration="2m13.133941304s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.044598419 +0000 UTC m=+154.018947464" watchObservedRunningTime="2026-01-21 14:33:28.133941304 +0000 UTC m=+154.108290349" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.146212 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.148157 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 14:33:28.64813947 +0000 UTC m=+154.622488515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.151619 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxjcg" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.199859 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v5kgf" podStartSLOduration=133.199844217 podStartE2EDuration="2m13.199844217s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.13568021 +0000 UTC m=+154.110029255" watchObservedRunningTime="2026-01-21 14:33:28.199844217 +0000 UTC m=+154.174193262" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.212916 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.247097 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.248995 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.748969833 +0000 UTC m=+154.723318938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.260482 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" podStartSLOduration=134.260464851 podStartE2EDuration="2m14.260464851s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.259452199 +0000 UTC m=+154.233801244" watchObservedRunningTime="2026-01-21 14:33:28.260464851 +0000 UTC m=+154.234813896" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.327463 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gsfns" podStartSLOduration=133.32744566 podStartE2EDuration="2m13.32744566s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.326446887 +0000 UTC m=+154.300795932" watchObservedRunningTime="2026-01-21 14:33:28.32744566 +0000 UTC m=+154.301794695" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.350515 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.351039 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.851026765 +0000 UTC m=+154.825375810 (durationBeforeRetry 500ms). 
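[editor's note] The pod_startup_latency_tracker lines threaded through this window record startup-SLO data. In these entries firstStartedPulling/lastFinishedPulling are the zero time (suggesting the images were already local), and podStartSLOduration works out to watchObservedRunningTime minus podCreationTimestamp; the ~2m12s-2m14s figures likely reflect the gap between pod creation at 14:31:14-14:31:15 and the node becoming ready, not slow containers. A small Go sketch reproducing that arithmetic from two timestamps copied out of the apiserver-76f77b778f-hjnhd entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied verbatim from the log entry above.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2026-01-21 14:31:14 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	watched, err := time.Parse(layout, "2026-01-21 14:33:28.260464851 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// Prints 2m14.260464851s, i.e. podStartSLOduration=134.260464851
    	fmt.Println(watched.Sub(created))
    }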
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.360416 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" podStartSLOduration=133.360400096 podStartE2EDuration="2m13.360400096s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.358431352 +0000 UTC m=+154.332780397" watchObservedRunningTime="2026-01-21 14:33:28.360400096 +0000 UTC m=+154.334749141" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.451676 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.452001 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:28.951986323 +0000 UTC m=+154.926335368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.473728 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" podStartSLOduration=133.473707529 podStartE2EDuration="2m13.473707529s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.431654991 +0000 UTC m=+154.406004036" watchObservedRunningTime="2026-01-21 14:33:28.473707529 +0000 UTC m=+154.448056574" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.499854 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8sn2t" podStartSLOduration=133.499833037 podStartE2EDuration="2m13.499833037s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.499015641 +0000 UTC m=+154.473364686" watchObservedRunningTime="2026-01-21 14:33:28.499833037 +0000 UTC m=+154.474182072" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.501089 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" podStartSLOduration=133.501082417 podStartE2EDuration="2m13.501082417s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.480336862 +0000 UTC m=+154.454685907" watchObservedRunningTime="2026-01-21 14:33:28.501082417 +0000 UTC m=+154.475431452" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.514591 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" podStartSLOduration=134.51457623 podStartE2EDuration="2m14.51457623s" podCreationTimestamp="2026-01-21 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:28.513070951 +0000 UTC m=+154.487419996" watchObservedRunningTime="2026-01-21 14:33:28.51457623 +0000 UTC m=+154.488925275" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.540664 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:28 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:28 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:28 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.540733 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.553232 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.553661 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.053642253 +0000 UTC m=+155.027991298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.654209 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.654409 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.154380253 +0000 UTC m=+155.128729368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.756017 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.756299 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.256287591 +0000 UTC m=+155.230636636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.857343 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.857492 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.357460565 +0000 UTC m=+155.331809610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.857608 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.857864 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.357856158 +0000 UTC m=+155.332205203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:28 crc kubenswrapper[4834]: I0121 14:33:28.958420 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:28 crc kubenswrapper[4834]: E0121 14:33:28.958802 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.458786705 +0000 UTC m=+155.433135750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.021900 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9v4tg" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.061136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.061557 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.561539789 +0000 UTC m=+155.535888834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.155756 4834 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7thnr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.156450 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" podUID="be302830-09db-4bde-8611-08739d8dff31" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.156613 4834 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7thnr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.156723 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" podUID="be302830-09db-4bde-8611-08739d8dff31" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.165065 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.165446 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.665426351 +0000 UTC m=+155.639775396 (durationBeforeRetry 500ms). 
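[editor's note] The probe failures in this window look like startup-ordering noise rather than independent faults: the router's startup probe returns 500 while its backends sync ([-]backend-http and [-]has-synced failing, [+]process-running ok), marketplace-operator's readiness endpoint refuses connections until the process binds :8080, and openshift-config-operator times out one liveness/readiness round trip before going ready shortly after. Mechanically each HTTP probe is just a GET against the pod IP with a short timeout; a rough Go stand-in is below. The 1s timeout (the kubelet default) and network reachability of 10.217.0.43 from wherever this runs are assumptions.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: time.Second} // assumed: default probe timeoutSeconds of 1
    	resp, err := client.Get("http://10.217.0.43:8080/healthz") // marketplace-operator's endpoint from the log
    	if err != nil {
    		// e.g. "connect: connection refused" while the container is still binding the port
    		fmt.Println("probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }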
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.190585 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" event={"ID":"9637e38c-b666-480c-a92a-71b40d1a41d0","Type":"ContainerStarted","Data":"530775692cc4c190a4843c7977508e84f472e1443ea322fc4c8e18c94e939faf"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.202877 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" event={"ID":"32b16913-b72e-4e5f-b684-913111a08bd7","Type":"ContainerStarted","Data":"d0336034aebf30fdc4f2b6e6c7ec93c18ddf0ae13b4b47bb37663446668fc525"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.218738 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" event={"ID":"11505903-40a1-4bd2-94c6-b8abbf15225e","Type":"ContainerStarted","Data":"eb877dd301f92c09078ee7ef39311131971d5569a91d71c0108b8f98abe3130f"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.224440 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gg52f" event={"ID":"62b731df-5898-4def-baa5-bc423a8c542b","Type":"ContainerStarted","Data":"bbcc2ce9252105d052276b7646681f40e1da5525f4eeb70a4761e5db39dab3f4"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.234943 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" event={"ID":"deaf8010-1fc2-41b3-b94b-6339c4846f32","Type":"ContainerStarted","Data":"5bd76565a9c48839abc813d2408465dfb566868c19806ddc35f5b87d095b0c9a"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.246834 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bk949" event={"ID":"468c7958-916e-4d52-bd3b-d8eeeaa09172","Type":"ContainerStarted","Data":"90dabdca86928102abb16ba93dab8a01c1a7e856f8e9af5a2f1d62f44a53c38c"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.247172 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bk949" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.264367 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-g4j7q" event={"ID":"741cdc21-96df-4d88-a05e-d877ea76aa87","Type":"ContainerStarted","Data":"34072b39a5113a6ec400fbf39e913f1b1bc48502937b0e4df429b0983e46ec52"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.266156 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.266818 4834 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.766799272 +0000 UTC m=+155.741148307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.285843 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" event={"ID":"ca0f8daf-3b69-4d64-9292-ad129dc27a0f","Type":"ContainerStarted","Data":"82768e5b597d787f8026a6877a8bfd9702d6e6ebf8282f83d70dbafdd872ddb2"} Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.297613 4834 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v9x5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.297665 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.358344 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7thnr" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.367429 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.369301 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.869284298 +0000 UTC m=+155.843633343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.375373 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fl4gd" podStartSLOduration=134.375359903 podStartE2EDuration="2m14.375359903s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:29.374346741 +0000 UTC m=+155.348695786" watchObservedRunningTime="2026-01-21 14:33:29.375359903 +0000 UTC m=+155.349708948" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.471707 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.472030 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:29.972018973 +0000 UTC m=+155.946368018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.534742 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:29 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:29 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:29 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.534819 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.572948 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.573301 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.073286481 +0000 UTC m=+156.047635526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.673989 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.674330 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.174313729 +0000 UTC m=+156.148662774 (durationBeforeRetry 500ms). 
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.699596 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwcqj" podStartSLOduration=134.699574229 podStartE2EDuration="2m14.699574229s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:29.58762194 +0000 UTC m=+155.561970985" watchObservedRunningTime="2026-01-21 14:33:29.699574229 +0000 UTC m=+155.673923274"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.700535 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-769ws"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.701449 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: W0121 14:33:29.706633 4834 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.706713 4834 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.717030 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" podStartSLOduration=134.717008299 podStartE2EDuration="2m14.717008299s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:29.709528879 +0000 UTC m=+155.683877934" watchObservedRunningTime="2026-01-21 14:33:29.717008299 +0000 UTC m=+155.691357344"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.735908 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.737206 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.741542 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.741651 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-769ws"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.761699 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.775635 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.775770 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.775800 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.775819 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5bg\" (UniqueName: \"kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.775995 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.27597942 +0000 UTC m=+156.250328465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.776434 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bk949" podStartSLOduration=9.776415803999999 podStartE2EDuration="9.776415804s" podCreationTimestamp="2026-01-21 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:29.776202977 +0000 UTC m=+155.750552022" watchObservedRunningTime="2026-01-21 14:33:29.776415804 +0000 UTC m=+155.750764849"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.823555 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-228c4" podStartSLOduration=134.823536135 podStartE2EDuration="2m14.823536135s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:29.818265976 +0000 UTC m=+155.792615021" watchObservedRunningTime="2026-01-21 14:33:29.823536135 +0000 UTC m=+155.797885180"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879241 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879282 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879298 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5bg\" (UniqueName: \"kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879340 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879356 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzx66\" (UniqueName: \"kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.879382 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.879702 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.379691016 +0000 UTC m=+156.354040061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.881090 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.881386 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.919418 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hlmg"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.920391 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.928938 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5bg\" (UniqueName: \"kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg\") pod \"community-operators-769ws\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.973550 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hlmg"]
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984508 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984734 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984771 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984814 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984905 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bjt\" (UniqueName: \"kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984972 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.984991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzx66\" (UniqueName: \"kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: E0121 14:33:29.985498 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.485451557 +0000 UTC m=+156.459800622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.986140 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:29 crc kubenswrapper[4834]: I0121 14:33:29.988378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.026684 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzx66\" (UniqueName: \"kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66\") pod \"certified-operators-ltsnd\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.062570 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltsnd"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093076 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48bjt\" (UniqueName: \"kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093131 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093164 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093626 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.093682 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.593658478 +0000 UTC m=+156.568007593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.093839 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.097597 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"]
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.098636 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.172493 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48bjt\" (UniqueName: \"kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt\") pod \"community-operators-4hlmg\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.178739 4834 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.193797 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.194098 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nbww\" (UniqueName: \"kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.194277 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.194414 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.194588 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.694572614 +0000 UTC m=+156.668921659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.219571 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"]
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.295896 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.295972 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.296063 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nbww\" (UniqueName: \"kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.296119 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.296796 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.296916 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.297038 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.797025799 +0000 UTC m=+156.771374844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.337085 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nbww\" (UniqueName: \"kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww\") pod \"certified-operators-hzgrw\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.345491 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" event={"ID":"9637e38c-b666-480c-a92a-71b40d1a41d0","Type":"ContainerStarted","Data":"354bf1ab4572f996b031704da2e181a46a9e9b17cd95422c7fe37b2c76d4cef5"}
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.348577 4834 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v9x5s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.348801 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.398349 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.398699 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:30.898684819 +0000 UTC m=+156.873033864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.498360 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrw"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.499894 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.507660 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:31.007646993 +0000 UTC m=+156.981996038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.563818 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:33:30 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld
Jan 21 14:33:30 crc kubenswrapper[4834]: [+]process-running ok
Jan 21 14:33:30 crc kubenswrapper[4834]: healthz check failed
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.563866 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.608581 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.608725 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:31.108699334 +0000 UTC m=+157.083048379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.608765 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.609169 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 14:33:31.109156107 +0000 UTC m=+157.083505142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qg6w9" (UID: "98c36cc5-0276-4002-943b-030fb686cae6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.652286 4834 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T14:33:30.178950022Z","Handler":null,"Name":""}
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.709380 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:30 crc kubenswrapper[4834]: E0121 14:33:30.709695 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 14:33:31.209681942 +0000 UTC m=+157.184030987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.712786 4834 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.712820 4834 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.811083 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.812456 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"]
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.836659 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.836698 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.931632 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.942206 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hlmg"
Jan 21 14:33:30 crc kubenswrapper[4834]: I0121 14:33:30.942737 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-769ws"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.179467 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.179885 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.211781 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qg6w9\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.238829 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.268390 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.286126 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.314650 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.386258 4834 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hjnhd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]log ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]etcd ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/max-in-flight-filter ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 21 14:33:31 crc kubenswrapper[4834]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-startinformers ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 21 14:33:31 crc kubenswrapper[4834]: livez check failed
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.386310 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" podUID="5c8ce07a-fac7-43b4-88f3-4e43da0c75bb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.410629 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.411836 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vzwpb"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.417392 4834 generic.go:334] "Generic (PLEG): container finished" podID="98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" containerID="06ff54c098b9a4b91dcba9432ccda9554b0ca2ff3172e1fd2c288224300e2693" exitCode=0
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.417473 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" event={"ID":"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0","Type":"ContainerDied","Data":"06ff54c098b9a4b91dcba9432ccda9554b0ca2ff3172e1fd2c288224300e2693"}
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.419189 4834 patch_prober.go:28] interesting pod/console-f9d7485db-vzwpb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.419238 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vzwpb" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.436201 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerStarted","Data":"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389"}
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.436245 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerStarted","Data":"a6d984d4aa9eb5e80426d6f99ea2681a896b25399007fa8412b73f9960d10701"}
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.464325 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" event={"ID":"9637e38c-b666-480c-a92a-71b40d1a41d0","Type":"ContainerStarted","Data":"4b61c63186ce18dec0b2d42a963e038bf138c78016c0835320a26b6df17cb244"}
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.534691 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.537497 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.552341 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.569594 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.570436 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" podStartSLOduration=11.570421473 podStartE2EDuration="11.570421473s" podCreationTimestamp="2026-01-21 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:31.564829704 +0000 UTC m=+157.539178749" watchObservedRunningTime="2026-01-21 14:33:31.570421473 +0000 UTC m=+157.544770518"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.580139 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 14:33:31 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld
Jan 21 14:33:31 crc kubenswrapper[4834]: [+]process-running ok
Jan 21 14:33:31 crc kubenswrapper[4834]: healthz check failed
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.580187 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.623795 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.624643 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.629158 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.629270 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.639717 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.659525 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwq7g\" (UniqueName: \"kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.659580 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.659623 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.763555 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.763620 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.763651 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.763685 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.763762 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwq7g\" (UniqueName: \"kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.764466 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.764530 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.808258 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwq7g\" (UniqueName: \"kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g\") pod \"redhat-marketplace-n466t\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.843414 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.843466 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.843497 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.843560 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.864716 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.864773 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.864905 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.908833 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.928279 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n466t"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.929321 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"]
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.930351 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsddw"
Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.951565 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:33:31 crc kubenswrapper[4834]: I0121 14:33:31.956667 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.029733 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-769ws"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.069152 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hlmg"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.076181 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.076241 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.076271 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4sh\" (UniqueName: \"kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.177149 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.177194 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.177225 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4sh\" (UniqueName: \"kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.177895 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.178213 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.185337 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qg6w9"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.211427 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4sh\" (UniqueName: \"kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh\") pod \"redhat-marketplace-dsddw\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.263239 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.334899 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.393610 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.473395 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.477305 4834 generic.go:334] "Generic (PLEG): container finished" podID="db774f6a-d370-4725-a77d-35da37c572d1" containerID="c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389" exitCode=0 Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.477369 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerDied","Data":"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.479265 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.487053 4834 generic.go:334] "Generic (PLEG): container finished" podID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerID="21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400" exitCode=0 Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.487129 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerDied","Data":"21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.487154 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerStarted","Data":"639204a681f088c6649a51dcd299d43adc02c5133a7273c49acadb44c6d46b62"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.492172 4834 generic.go:334] "Generic (PLEG): container finished" podID="8088483c-12ae-4825-a95a-42bec2973b76" containerID="250288771a7af048a1fc19e5cf82d253a79f2f21809311e81bca593e0b6447d2" exitCode=0 Jan 21 14:33:32 crc 
kubenswrapper[4834]: I0121 14:33:32.492397 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerDied","Data":"250288771a7af048a1fc19e5cf82d253a79f2f21809311e81bca593e0b6447d2"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.492437 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerStarted","Data":"89dc7c5a0d8f488b99fe0e1f467ef32545a09b6b6719692d05ef23dd4621563f"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.507965 4834 generic.go:334] "Generic (PLEG): container finished" podID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerID="2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0" exitCode=0 Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.508219 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerDied","Data":"2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.508261 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerStarted","Data":"0f92a4c8b56e272750a0b520d638e1744e98bc0cebd8aa67dd49f56a97a7a681"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.534899 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" event={"ID":"98c36cc5-0276-4002-943b-030fb686cae6","Type":"ContainerStarted","Data":"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.534952 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.534963 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" event={"ID":"98c36cc5-0276-4002-943b-030fb686cae6","Type":"ContainerStarted","Data":"a4ac69525bf50bd16f62e53079f503f9bde2f6c96727c4496890010c1ca77f14"} Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.535551 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.544886 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:32 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:32 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:32 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.544991 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.613573 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" podStartSLOduration=137.613550414 podStartE2EDuration="2m17.613550414s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:32.595274908 +0000 UTC m=+158.569623953" watchObservedRunningTime="2026-01-21 14:33:32.613550414 +0000 UTC m=+158.587899479" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.638312 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"] Jan 21 14:33:32 crc kubenswrapper[4834]: W0121 14:33:32.640522 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c225e2_f31a_4572_814e_804233b6c1fd.slice/crio-948ac30da8c72a42326227ac1d4db168b7c941a9ebd910312f9ac130ff631ca5 WatchSource:0}: Error finding container 948ac30da8c72a42326227ac1d4db168b7c941a9ebd910312f9ac130ff631ca5: Status 404 returned error can't find the container with id 948ac30da8c72a42326227ac1d4db168b7c941a9ebd910312f9ac130ff631ca5 Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.895328 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.895711 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.901666 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.902780 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.902966 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.904981 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.920692 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.932020 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.993196 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzbc\" (UniqueName: \"kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.993267 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:32 crc kubenswrapper[4834]: I0121 14:33:32.993322 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.003656 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.094654 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpstr\" (UniqueName: \"kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr\") pod \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.094724 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") pod \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.094843 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") pod \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\" (UID: \"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0\") " Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.095046 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.095166 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzbc\" (UniqueName: \"kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.095218 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.096580 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" (UID: "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.097496 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.097568 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.110055 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr" (OuterVolumeSpecName: "kube-api-access-qpstr") pod "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" (UID: "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0"). InnerVolumeSpecName "kube-api-access-qpstr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.110103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" (UID: "98ce7851-62b5-4cb5-b7d4-2e03f1606cb0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.112719 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzbc\" (UniqueName: \"kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc\") pod \"redhat-operators-8s2bs\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.196896 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpstr\" (UniqueName: \"kubernetes.io/projected/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-kube-api-access-qpstr\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.197378 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.197393 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.250032 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.312606 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:33:33 crc kubenswrapper[4834]: E0121 14:33:33.312966 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" containerName="collect-profiles" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.312982 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" containerName="collect-profiles" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.313112 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" containerName="collect-profiles" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.314241 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.317514 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.408059 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.408124 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.408469 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcbn\" (UniqueName: \"kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.478380 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.509400 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddcbn\" (UniqueName: \"kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.509468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.509500 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.509948 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.510453 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " 
pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.527677 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddcbn\" (UniqueName: \"kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn\") pod \"redhat-operators-bglrk\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.536051 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:33 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:33 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:33 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.536101 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.544632 4834 generic.go:334] "Generic (PLEG): container finished" podID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerID="926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955" exitCode=0 Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.544696 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerDied","Data":"926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.544726 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerStarted","Data":"a529d03002b415dc00b6736bfdcfd96365a51922ed78b276a61507d85458c3fa"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.551382 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2bs" event={"ID":"bc6967c6-5420-404e-88dd-95664165decf","Type":"ContainerStarted","Data":"186bc3d1fdcaaeb50660068edf5813d24bb57e8e1d8742b1f32673d32f808abf"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.570728 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d6f3c113-715c-4bdf-ae3c-805dc8cea73c","Type":"ContainerStarted","Data":"425b9596d8112852bd60d6e07b7bd6615a8a00a2c6fb19c9c780b3170e2777ee"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.570776 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d6f3c113-715c-4bdf-ae3c-805dc8cea73c","Type":"ContainerStarted","Data":"109f83ea0e6c827611609f61d5de9c06dfe04fe8c5c45fd5c1762ca16e4c84db"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.572437 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" event={"ID":"98ce7851-62b5-4cb5-b7d4-2e03f1606cb0","Type":"ContainerDied","Data":"6a37c69ed35765442c8056abd30ee87a1e631972922af867ea5aa54d5c423c94"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 
14:33:33.572464 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a37c69ed35765442c8056abd30ee87a1e631972922af867ea5aa54d5c423c94" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.572506 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.576153 4834 generic.go:334] "Generic (PLEG): container finished" podID="67c225e2-f31a-4572-814e-804233b6c1fd" containerID="d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285" exitCode=0 Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.577605 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsddw" event={"ID":"67c225e2-f31a-4572-814e-804233b6c1fd","Type":"ContainerDied","Data":"d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.577640 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsddw" event={"ID":"67c225e2-f31a-4572-814e-804233b6c1fd","Type":"ContainerStarted","Data":"948ac30da8c72a42326227ac1d4db168b7c941a9ebd910312f9ac130ff631ca5"} Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.585004 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bfxgg" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.592915 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.5928931090000003 podStartE2EDuration="2.592893109s" podCreationTimestamp="2026-01-21 14:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:33.589728878 +0000 UTC m=+159.564077923" watchObservedRunningTime="2026-01-21 14:33:33.592893109 +0000 UTC m=+159.567242154" Jan 21 14:33:33 crc kubenswrapper[4834]: I0121 14:33:33.635742 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.018524 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:33:34 crc kubenswrapper[4834]: W0121 14:33:34.024253 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeddf9593_c223_4b82_9774_6059413ae2d0.slice/crio-200055577f91a865651ef5347133df20487a4829e6a8a4ca8e638d6580fc713e WatchSource:0}: Error finding container 200055577f91a865651ef5347133df20487a4829e6a8a4ca8e638d6580fc713e: Status 404 returned error can't find the container with id 200055577f91a865651ef5347133df20487a4829e6a8a4ca8e638d6580fc713e Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.537611 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:34 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:34 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:34 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.537686 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.589689 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerStarted","Data":"200055577f91a865651ef5347133df20487a4829e6a8a4ca8e638d6580fc713e"} Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.591293 4834 generic.go:334] "Generic (PLEG): container finished" podID="bc6967c6-5420-404e-88dd-95664165decf" containerID="32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3" exitCode=0 Jan 21 14:33:34 crc kubenswrapper[4834]: I0121 14:33:34.592216 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2bs" event={"ID":"bc6967c6-5420-404e-88dd-95664165decf","Type":"ContainerDied","Data":"32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3"} Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.538537 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:35 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:35 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:35 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.538883 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.609459 4834 generic.go:334] "Generic (PLEG): container finished" podID="eddf9593-c223-4b82-9774-6059413ae2d0" 
containerID="c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2" exitCode=0 Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.609544 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerDied","Data":"c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2"} Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.613120 4834 generic.go:334] "Generic (PLEG): container finished" podID="d6f3c113-715c-4bdf-ae3c-805dc8cea73c" containerID="425b9596d8112852bd60d6e07b7bd6615a8a00a2c6fb19c9c780b3170e2777ee" exitCode=0 Jan 21 14:33:35 crc kubenswrapper[4834]: I0121 14:33:35.613180 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d6f3c113-715c-4bdf-ae3c-805dc8cea73c","Type":"ContainerDied","Data":"425b9596d8112852bd60d6e07b7bd6615a8a00a2c6fb19c9c780b3170e2777ee"} Jan 21 14:33:36 crc kubenswrapper[4834]: I0121 14:33:36.185770 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:36 crc kubenswrapper[4834]: I0121 14:33:36.189859 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hjnhd" Jan 21 14:33:36 crc kubenswrapper[4834]: I0121 14:33:36.537871 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:36 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:36 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:36 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:36 crc kubenswrapper[4834]: I0121 14:33:36.537953 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:36 crc kubenswrapper[4834]: I0121 14:33:36.995811 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.181141 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir\") pod \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.181289 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access\") pod \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\" (UID: \"d6f3c113-715c-4bdf-ae3c-805dc8cea73c\") " Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.181277 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6f3c113-715c-4bdf-ae3c-805dc8cea73c" (UID: "d6f3c113-715c-4bdf-ae3c-805dc8cea73c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.181546 4834 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.188751 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6f3c113-715c-4bdf-ae3c-805dc8cea73c" (UID: "d6f3c113-715c-4bdf-ae3c-805dc8cea73c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.282977 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6f3c113-715c-4bdf-ae3c-805dc8cea73c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.517456 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:33:37 crc kubenswrapper[4834]: E0121 14:33:37.517902 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f3c113-715c-4bdf-ae3c-805dc8cea73c" containerName="pruner" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.517915 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f3c113-715c-4bdf-ae3c-805dc8cea73c" containerName="pruner" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.518229 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f3c113-715c-4bdf-ae3c-805dc8cea73c" containerName="pruner" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.519185 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.520354 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.522018 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.523278 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.538272 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:37 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:37 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:37 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.538407 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.586125 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.590785 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d31034df-9ceb-49b0-9ad5-334dcaa28fa4-metrics-certs\") pod \"network-metrics-daemon-dtqf2\" (UID: \"d31034df-9ceb-49b0-9ad5-334dcaa28fa4\") " pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.652032 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.651993 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d6f3c113-715c-4bdf-ae3c-805dc8cea73c","Type":"ContainerDied","Data":"109f83ea0e6c827611609f61d5de9c06dfe04fe8c5c45fd5c1762ca16e4c84db"} Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.652215 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109f83ea0e6c827611609f61d5de9c06dfe04fe8c5c45fd5c1762ca16e4c84db" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.687293 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.687385 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.789224 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.789303 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.789426 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.818597 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.856575 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtqf2" Jan 21 14:33:37 crc kubenswrapper[4834]: I0121 14:33:37.858724 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.030916 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bk949" Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.541462 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:38 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:38 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:38 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.541973 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.576450 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtqf2"] Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.597059 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 14:33:38 crc kubenswrapper[4834]: W0121 14:33:38.622464 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podce6e21aa_88a0_473f_9677_8e591a782b49.slice/crio-ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7 WatchSource:0}: Error finding container ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7: Status 404 returned error can't find the container with id ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7 Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.774141 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" event={"ID":"d31034df-9ceb-49b0-9ad5-334dcaa28fa4","Type":"ContainerStarted","Data":"92d87c26344c41068878ed8f6bf8d3c598331ca6dc3096351d0d2acce0652ba8"} Jan 21 14:33:38 crc kubenswrapper[4834]: I0121 14:33:38.775317 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce6e21aa-88a0-473f-9677-8e591a782b49","Type":"ContainerStarted","Data":"ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7"} Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.020105 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.547363 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:39 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:39 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:39 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.547423 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.802730 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" event={"ID":"d31034df-9ceb-49b0-9ad5-334dcaa28fa4","Type":"ContainerStarted","Data":"97196e494a303a33a4ec0004fc17041fd2a646e98421780ea6b3de001d3ea58b"} Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.811814 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce6e21aa-88a0-473f-9677-8e591a782b49","Type":"ContainerStarted","Data":"b8244ffca66fa6a26e70328094644c3907b2a2cdcabdfe7451ce8051a8f3acdf"} Jan 21 14:33:39 crc kubenswrapper[4834]: I0121 14:33:39.843665 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.843644205 podStartE2EDuration="2.843644205s" podCreationTimestamp="2026-01-21 14:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:39.842858709 +0000 UTC m=+165.817207764" watchObservedRunningTime="2026-01-21 14:33:39.843644205 +0000 UTC m=+165.817993250" Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.540284 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:40 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:40 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:40 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.540576 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.823944 4834 generic.go:334] "Generic (PLEG): container finished" podID="ce6e21aa-88a0-473f-9677-8e591a782b49" containerID="b8244ffca66fa6a26e70328094644c3907b2a2cdcabdfe7451ce8051a8f3acdf" exitCode=0 Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.824016 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce6e21aa-88a0-473f-9677-8e591a782b49","Type":"ContainerDied","Data":"b8244ffca66fa6a26e70328094644c3907b2a2cdcabdfe7451ce8051a8f3acdf"} Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.833443 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtqf2" event={"ID":"d31034df-9ceb-49b0-9ad5-334dcaa28fa4","Type":"ContainerStarted","Data":"898e0f45337ac0c377777271c8beed050ed86779a397e698e7f7cbae9b66e550"} Jan 21 14:33:40 crc kubenswrapper[4834]: I0121 14:33:40.867981 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dtqf2" podStartSLOduration=145.867959332 podStartE2EDuration="2m25.867959332s" podCreationTimestamp="2026-01-21 14:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:33:40.859517661 +0000 UTC m=+166.833866716" watchObservedRunningTime="2026-01-21 14:33:40.867959332 +0000 
UTC m=+166.842308377" Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.408647 4834 patch_prober.go:28] interesting pod/console-f9d7485db-vzwpb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.408714 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vzwpb" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.535952 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:41 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:41 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:41 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.536022 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.843307 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.843429 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.843425 4834 patch_prober.go:28] interesting pod/downloads-7954f5f757-5hf5s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 21 14:33:41 crc kubenswrapper[4834]: I0121 14:33:41.843543 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5hf5s" podUID="51da62a2-0544-4231-8ab0-0b452ff8d2af" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 21 14:33:42 crc kubenswrapper[4834]: I0121 14:33:42.540567 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:42 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:42 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:42 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:42 crc kubenswrapper[4834]: I0121 14:33:42.541064 4834 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:43 crc kubenswrapper[4834]: I0121 14:33:43.535353 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:43 crc kubenswrapper[4834]: [-]has-synced failed: reason withheld Jan 21 14:33:43 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:43 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:43 crc kubenswrapper[4834]: I0121 14:33:43.535413 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:44 crc kubenswrapper[4834]: I0121 14:33:44.538336 4834 patch_prober.go:28] interesting pod/router-default-5444994796-4sz2q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 14:33:44 crc kubenswrapper[4834]: [+]has-synced ok Jan 21 14:33:44 crc kubenswrapper[4834]: [+]process-running ok Jan 21 14:33:44 crc kubenswrapper[4834]: healthz check failed Jan 21 14:33:44 crc kubenswrapper[4834]: I0121 14:33:44.538398 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sz2q" podUID="5733e12b-1157-455a-a5fb-06f8bfde751f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:33:45 crc kubenswrapper[4834]: I0121 14:33:45.537306 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:45 crc kubenswrapper[4834]: I0121 14:33:45.542356 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4sz2q" Jan 21 14:33:47 crc kubenswrapper[4834]: I0121 14:33:47.114508 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:33:47 crc kubenswrapper[4834]: I0121 14:33:47.114962 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.292081 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.412644 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.416948 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.569005 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.625080 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access\") pod \"ce6e21aa-88a0-473f-9677-8e591a782b49\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.625172 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir\") pod \"ce6e21aa-88a0-473f-9677-8e591a782b49\" (UID: \"ce6e21aa-88a0-473f-9677-8e591a782b49\") " Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.625290 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce6e21aa-88a0-473f-9677-8e591a782b49" (UID: "ce6e21aa-88a0-473f-9677-8e591a782b49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.625512 4834 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce6e21aa-88a0-473f-9677-8e591a782b49-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.633666 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce6e21aa-88a0-473f-9677-8e591a782b49" (UID: "ce6e21aa-88a0-473f-9677-8e591a782b49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.729674 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6e21aa-88a0-473f-9677-8e591a782b49-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.862959 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5hf5s" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.914733 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ce6e21aa-88a0-473f-9677-8e591a782b49","Type":"ContainerDied","Data":"ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7"} Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.914791 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed46476c7d3ce3aabf86e5eb95c64a33aa870e21c89cfc154649e452af5a11a7" Jan 21 14:33:51 crc kubenswrapper[4834]: I0121 14:33:51.914754 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 14:34:02 crc kubenswrapper[4834]: I0121 14:34:02.268604 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 14:34:02 crc kubenswrapper[4834]: I0121 14:34:02.992760 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7v775" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.715296 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:34:11 crc kubenswrapper[4834]: E0121 14:34:11.716392 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e21aa-88a0-473f-9677-8e591a782b49" containerName="pruner" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.716417 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e21aa-88a0-473f-9677-8e591a782b49" containerName="pruner" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.716601 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6e21aa-88a0-473f-9677-8e591a782b49" containerName="pruner" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.717203 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.722643 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.722670 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.727110 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.819461 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.819538 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.920698 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.920761 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.920951 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:11 crc kubenswrapper[4834]: I0121 14:34:11.940737 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:12 crc kubenswrapper[4834]: I0121 14:34:12.081186 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:14 crc kubenswrapper[4834]: E0121 14:34:14.024718 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:34:14 crc kubenswrapper[4834]: E0121 14:34:14.025723 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk5bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-769ws_openshift-marketplace(f245cd57-57f1-40a7-b0c5-edb85e06871d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:14 crc kubenswrapper[4834]: E0121 14:34:14.027143 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-769ws" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" Jan 21 14:34:15 crc kubenswrapper[4834]: E0121 14:34:15.276883 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 14:34:15 crc kubenswrapper[4834]: E0121 14:34:15.277088 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48bjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4hlmg_openshift-marketplace(8088483c-12ae-4825-a95a-42bec2973b76): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:15 crc kubenswrapper[4834]: E0121 14:34:15.278246 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4hlmg" podUID="8088483c-12ae-4825-a95a-42bec2973b76" Jan 21 14:34:16 crc kubenswrapper[4834]: E0121 14:34:16.146121 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-769ws" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" Jan 21 14:34:16 crc kubenswrapper[4834]: E0121 14:34:16.146138 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4hlmg" podUID="8088483c-12ae-4825-a95a-42bec2973b76" Jan 21 14:34:16 crc kubenswrapper[4834]: 
I0121 14:34:16.301580 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.302686 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.312701 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.381943 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.382235 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.382317 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.483338 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.483479 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.483438 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.483609 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.483700 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.505371 
4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:16 crc kubenswrapper[4834]: E0121 14:34:16.543321 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:34:16 crc kubenswrapper[4834]: E0121 14:34:16.543800 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzx66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ltsnd_openshift-marketplace(db774f6a-d370-4725-a77d-35da37c572d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:16 crc kubenswrapper[4834]: E0121 14:34:16.545013 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ltsnd" podUID="db774f6a-d370-4725-a77d-35da37c572d1" Jan 21 14:34:16 crc kubenswrapper[4834]: I0121 14:34:16.637069 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 14:34:17 crc kubenswrapper[4834]: I0121 14:34:17.115532 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:34:17 crc kubenswrapper[4834]: I0121 14:34:17.115594 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:34:17 crc kubenswrapper[4834]: I0121 14:34:17.115644 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:34:17 crc kubenswrapper[4834]: I0121 14:34:17.116797 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:34:17 crc kubenswrapper[4834]: I0121 14:34:17.116911 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870" gracePeriod=600 Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.751615 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ltsnd" podUID="db774f6a-d370-4725-a77d-35da37c572d1" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.815827 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.816046 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4x4sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dsddw_openshift-marketplace(67c225e2-f31a-4572-814e-804233b6c1fd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.819024 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dsddw" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.841423 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.841640 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwq7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-n466t_openshift-marketplace(8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.842848 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-n466t" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.852496 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.852609 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nbww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hzgrw_openshift-marketplace(7708819f-d97a-47a7-b9a6-51fe7a7f503f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:17 crc kubenswrapper[4834]: E0121 14:34:17.853878 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hzgrw" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" Jan 21 14:34:18 crc kubenswrapper[4834]: I0121 14:34:18.155821 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870" exitCode=0 Jan 21 14:34:18 crc kubenswrapper[4834]: I0121 14:34:18.155915 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870"} Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.959812 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dsddw" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.959838 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-n466t" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.959955 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hzgrw" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.988558 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.988780 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddcbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bglrk_openshift-marketplace(eddf9593-c223-4b82-9774-6059413ae2d0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.989093 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.989338 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgzbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8s2bs_openshift-marketplace(bc6967c6-5420-404e-88dd-95664165decf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.990020 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bglrk" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" Jan 21 14:34:20 crc kubenswrapper[4834]: E0121 14:34:20.991515 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8s2bs" podUID="bc6967c6-5420-404e-88dd-95664165decf" Jan 21 14:34:21 crc kubenswrapper[4834]: I0121 14:34:21.173165 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480"} Jan 21 14:34:21 crc kubenswrapper[4834]: E0121 14:34:21.174493 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bglrk" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" Jan 21 14:34:21 crc kubenswrapper[4834]: E0121 14:34:21.174490 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8s2bs" podUID="bc6967c6-5420-404e-88dd-95664165decf" Jan 21 14:34:21 crc kubenswrapper[4834]: I0121 14:34:21.402561 4834 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 14:34:21 crc kubenswrapper[4834]: I0121 14:34:21.425178 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 14:34:21 crc kubenswrapper[4834]: W0121 14:34:21.431726 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff08c980_aca5_4de9_ad83_10c979bc28fb.slice/crio-e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32 WatchSource:0}: Error finding container e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32: Status 404 returned error can't find the container with id e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32 Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.180106 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ff08c980-aca5-4de9-ad83-10c979bc28fb","Type":"ContainerStarted","Data":"08843f096fc44b62dc9f3c007d9a095d0890b97a03e7d35a377fd82247b4e89f"} Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.180828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ff08c980-aca5-4de9-ad83-10c979bc28fb","Type":"ContainerStarted","Data":"e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32"} Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.182700 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"518edc2b-2efc-47e5-a3f1-f82f3e745961","Type":"ContainerStarted","Data":"ce52e2182244ddfcd4cdcc008ab5f2776be756781948559b9b888c85bcc1cb5d"} Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.182765 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"518edc2b-2efc-47e5-a3f1-f82f3e745961","Type":"ContainerStarted","Data":"7b4bb64b6391ba43bfa8630546295fc44c0c37628fdc23f9c5a526018b8dd221"} Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.195330 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.195314976 podStartE2EDuration="6.195314976s" podCreationTimestamp="2026-01-21 14:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:34:22.193403105 +0000 UTC m=+208.167752170" watchObservedRunningTime="2026-01-21 14:34:22.195314976 +0000 UTC m=+208.169664021" Jan 21 14:34:22 crc kubenswrapper[4834]: I0121 14:34:22.210114 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=11.21009146 podStartE2EDuration="11.21009146s" podCreationTimestamp="2026-01-21 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:34:22.207305551 +0000 UTC m=+208.181654596" watchObservedRunningTime="2026-01-21 14:34:22.21009146 +0000 UTC m=+208.184440515" Jan 21 14:34:23 crc kubenswrapper[4834]: I0121 14:34:23.189447 4834 generic.go:334] "Generic (PLEG): container finished" podID="518edc2b-2efc-47e5-a3f1-f82f3e745961" containerID="ce52e2182244ddfcd4cdcc008ab5f2776be756781948559b9b888c85bcc1cb5d" exitCode=0 Jan 21 14:34:23 crc kubenswrapper[4834]: I0121 14:34:23.189529 4834 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"518edc2b-2efc-47e5-a3f1-f82f3e745961","Type":"ContainerDied","Data":"ce52e2182244ddfcd4cdcc008ab5f2776be756781948559b9b888c85bcc1cb5d"} Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.434350 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.585791 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access\") pod \"518edc2b-2efc-47e5-a3f1-f82f3e745961\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.585901 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir\") pod \"518edc2b-2efc-47e5-a3f1-f82f3e745961\" (UID: \"518edc2b-2efc-47e5-a3f1-f82f3e745961\") " Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.586043 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "518edc2b-2efc-47e5-a3f1-f82f3e745961" (UID: "518edc2b-2efc-47e5-a3f1-f82f3e745961"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.586297 4834 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/518edc2b-2efc-47e5-a3f1-f82f3e745961-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.592201 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "518edc2b-2efc-47e5-a3f1-f82f3e745961" (UID: "518edc2b-2efc-47e5-a3f1-f82f3e745961"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:24 crc kubenswrapper[4834]: I0121 14:34:24.687505 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/518edc2b-2efc-47e5-a3f1-f82f3e745961-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:25 crc kubenswrapper[4834]: I0121 14:34:25.201850 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"518edc2b-2efc-47e5-a3f1-f82f3e745961","Type":"ContainerDied","Data":"7b4bb64b6391ba43bfa8630546295fc44c0c37628fdc23f9c5a526018b8dd221"} Jan 21 14:34:25 crc kubenswrapper[4834]: I0121 14:34:25.201897 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b4bb64b6391ba43bfa8630546295fc44c0c37628fdc23f9c5a526018b8dd221" Jan 21 14:34:25 crc kubenswrapper[4834]: I0121 14:34:25.201918 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 14:34:33 crc kubenswrapper[4834]: I0121 14:34:33.244643 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerStarted","Data":"282cedaad58d3d34d7e5bb85a8502333f421d84fe7a3751e1730279f60e02786"} Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.253466 4834 generic.go:334] "Generic (PLEG): container finished" podID="8088483c-12ae-4825-a95a-42bec2973b76" containerID="282cedaad58d3d34d7e5bb85a8502333f421d84fe7a3751e1730279f60e02786" exitCode=0 Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.253525 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerDied","Data":"282cedaad58d3d34d7e5bb85a8502333f421d84fe7a3751e1730279f60e02786"} Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.836720 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mcc7f"] Jan 21 14:34:34 crc kubenswrapper[4834]: E0121 14:34:34.837374 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518edc2b-2efc-47e5-a3f1-f82f3e745961" containerName="pruner" Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.837397 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="518edc2b-2efc-47e5-a3f1-f82f3e745961" containerName="pruner" Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.837563 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="518edc2b-2efc-47e5-a3f1-f82f3e745961" containerName="pruner" Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.838101 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:34 crc kubenswrapper[4834]: I0121 14:34:34.857642 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mcc7f"] Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017530 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-registry-tls\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017589 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-trusted-ca\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017616 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8bwm\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-kube-api-access-x8bwm\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017640 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ba60424-3923-4599-917d-67f90083757b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017674 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ba60424-3923-4599-917d-67f90083757b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017777 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017830 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-registry-certificates\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.017852 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-bound-sa-token\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.056030 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119397 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-trusted-ca\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119463 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8bwm\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-kube-api-access-x8bwm\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119497 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ba60424-3923-4599-917d-67f90083757b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ba60424-3923-4599-917d-67f90083757b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119594 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-bound-sa-token\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.119617 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-registry-certificates\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.120009 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-registry-tls\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.120136 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9ba60424-3923-4599-917d-67f90083757b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.120799 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-trusted-ca\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.121215 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9ba60424-3923-4599-917d-67f90083757b-registry-certificates\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.126121 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9ba60424-3923-4599-917d-67f90083757b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.126863 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-registry-tls\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.141110 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-bound-sa-token\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.146514 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8bwm\" (UniqueName: \"kubernetes.io/projected/9ba60424-3923-4599-917d-67f90083757b-kube-api-access-x8bwm\") pod \"image-registry-66df7c8f76-mcc7f\" (UID: \"9ba60424-3923-4599-917d-67f90083757b\") " pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.152460 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.263191 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerStarted","Data":"64c2d3a47e8a2ab9b1ff87b22cb103c5191e30f35f3b5acaa82eb89bbcc1f017"} Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.270443 4834 generic.go:334] "Generic (PLEG): container finished" podID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerID="6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30" exitCode=0 Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.270501 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerDied","Data":"6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30"} Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.272663 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerStarted","Data":"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f"} Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.301672 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hlmg" podStartSLOduration=3.815874792 podStartE2EDuration="1m6.301646874s" podCreationTimestamp="2026-01-21 14:33:29 +0000 UTC" firstStartedPulling="2026-01-21 14:33:32.493531145 +0000 UTC m=+158.467880190" lastFinishedPulling="2026-01-21 14:34:34.979303217 +0000 UTC m=+220.953652272" observedRunningTime="2026-01-21 14:34:35.2993705 +0000 UTC m=+221.273719545" watchObservedRunningTime="2026-01-21 14:34:35.301646874 +0000 UTC m=+221.275995919" Jan 21 14:34:35 crc kubenswrapper[4834]: I0121 14:34:35.643887 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mcc7f"] Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.290077 4834 generic.go:334] "Generic (PLEG): container finished" podID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerID="275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1" exitCode=0 Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.290209 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerDied","Data":"275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1"} Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.292467 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" event={"ID":"9ba60424-3923-4599-917d-67f90083757b","Type":"ContainerStarted","Data":"6ff2947d8620e8685edb5d60ebbb705e4ab84a96311c0ec360025fd531c01d9b"} Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.294615 4834 generic.go:334] "Generic (PLEG): container finished" podID="eddf9593-c223-4b82-9774-6059413ae2d0" containerID="f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f" exitCode=0 Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.294683 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" 
event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerDied","Data":"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f"} Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.298688 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerStarted","Data":"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b"} Jan 21 14:34:36 crc kubenswrapper[4834]: I0121 14:34:36.311822 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerStarted","Data":"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108"} Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.323036 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" event={"ID":"9ba60424-3923-4599-917d-67f90083757b","Type":"ContainerStarted","Data":"b88c546c6c5b4423e24d9b149c185964c40d8f403d8e2d400ee841b6a8725dac"} Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.324170 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.327306 4834 generic.go:334] "Generic (PLEG): container finished" podID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerID="9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b" exitCode=0 Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.327355 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerDied","Data":"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b"} Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.332485 4834 generic.go:334] "Generic (PLEG): container finished" podID="db774f6a-d370-4725-a77d-35da37c572d1" containerID="86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108" exitCode=0 Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.332529 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerDied","Data":"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108"} Jan 21 14:34:37 crc kubenswrapper[4834]: I0121 14:34:37.343161 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f" podStartSLOduration=3.343135299 podStartE2EDuration="3.343135299s" podCreationTimestamp="2026-01-21 14:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:34:37.342580711 +0000 UTC m=+223.316929766" watchObservedRunningTime="2026-01-21 14:34:37.343135299 +0000 UTC m=+223.317484344" Jan 21 14:34:40 crc kubenswrapper[4834]: I0121 14:34:40.943956 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:40 crc kubenswrapper[4834]: I0121 14:34:40.944624 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:41 crc kubenswrapper[4834]: I0121 14:34:41.237353 4834 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:41 crc kubenswrapper[4834]: I0121 14:34:41.395160 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:43 crc kubenswrapper[4834]: I0121 14:34:43.289590 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hlmg"] Jan 21 14:34:43 crc kubenswrapper[4834]: I0121 14:34:43.362483 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hlmg" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="registry-server" containerID="cri-o://64c2d3a47e8a2ab9b1ff87b22cb103c5191e30f35f3b5acaa82eb89bbcc1f017" gracePeriod=2 Jan 21 14:34:44 crc kubenswrapper[4834]: I0121 14:34:44.971495 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zjflv"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.377100 4834 generic.go:334] "Generic (PLEG): container finished" podID="8088483c-12ae-4825-a95a-42bec2973b76" containerID="64c2d3a47e8a2ab9b1ff87b22cb103c5191e30f35f3b5acaa82eb89bbcc1f017" exitCode=0 Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.377157 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerDied","Data":"64c2d3a47e8a2ab9b1ff87b22cb103c5191e30f35f3b5acaa82eb89bbcc1f017"} Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.446004 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.452828 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.465299 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-769ws"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.470103 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.470285 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" containerID="cri-o://9304d7afc79f0e1878eeb5d03d2d81740852dbd4b9bb0f68647536a0c791773c" gracePeriod=30 Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.486123 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.492758 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lt7k4"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.493638 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.496229 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.500552 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.510139 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lt7k4"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.513616 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.567998 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qx5p\" (UniqueName: \"kubernetes.io/projected/494aef4f-fcf4-422a-be16-b39449045941-kube-api-access-2qx5p\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.568148 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494aef4f-fcf4-422a-be16-b39449045941-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.568192 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/494aef4f-fcf4-422a-be16-b39449045941-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.669049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494aef4f-fcf4-422a-be16-b39449045941-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.669126 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/494aef4f-fcf4-422a-be16-b39449045941-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.669191 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qx5p\" (UniqueName: \"kubernetes.io/projected/494aef4f-fcf4-422a-be16-b39449045941-kube-api-access-2qx5p\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.671155 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/494aef4f-fcf4-422a-be16-b39449045941-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.677909 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/494aef4f-fcf4-422a-be16-b39449045941-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.688685 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qx5p\" (UniqueName: \"kubernetes.io/projected/494aef4f-fcf4-422a-be16-b39449045941-kube-api-access-2qx5p\") pod \"marketplace-operator-79b997595-lt7k4\" (UID: \"494aef4f-fcf4-422a-be16-b39449045941\") " pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:45 crc kubenswrapper[4834]: I0121 14:34:45.817132 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.276005 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.410619 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content\") pod \"8088483c-12ae-4825-a95a-42bec2973b76\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.410690 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4hlmg" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.410684 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hlmg" event={"ID":"8088483c-12ae-4825-a95a-42bec2973b76","Type":"ContainerDied","Data":"89dc7c5a0d8f488b99fe0e1f467ef32545a09b6b6719692d05ef23dd4621563f"} Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.410808 4834 scope.go:117] "RemoveContainer" containerID="64c2d3a47e8a2ab9b1ff87b22cb103c5191e30f35f3b5acaa82eb89bbcc1f017" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.412515 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities\") pod \"8088483c-12ae-4825-a95a-42bec2973b76\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.412565 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48bjt\" (UniqueName: \"kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt\") pod \"8088483c-12ae-4825-a95a-42bec2973b76\" (UID: \"8088483c-12ae-4825-a95a-42bec2973b76\") " Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.415078 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities" (OuterVolumeSpecName: "utilities") pod "8088483c-12ae-4825-a95a-42bec2973b76" (UID: "8088483c-12ae-4825-a95a-42bec2973b76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.418136 4834 generic.go:334] "Generic (PLEG): container finished" podID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerID="9304d7afc79f0e1878eeb5d03d2d81740852dbd4b9bb0f68647536a0c791773c" exitCode=0 Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.418177 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" event={"ID":"da7838db-42fa-496d-bdec-712d5fcc46c6","Type":"ContainerDied","Data":"9304d7afc79f0e1878eeb5d03d2d81740852dbd4b9bb0f68647536a0c791773c"} Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.421462 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt" (OuterVolumeSpecName: "kube-api-access-48bjt") pod "8088483c-12ae-4825-a95a-42bec2973b76" (UID: "8088483c-12ae-4825-a95a-42bec2973b76"). InnerVolumeSpecName "kube-api-access-48bjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.464943 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8088483c-12ae-4825-a95a-42bec2973b76" (UID: "8088483c-12ae-4825-a95a-42bec2973b76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.516522 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.516560 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48bjt\" (UniqueName: \"kubernetes.io/projected/8088483c-12ae-4825-a95a-42bec2973b76-kube-api-access-48bjt\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.516571 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8088483c-12ae-4825-a95a-42bec2973b76-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.746692 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hlmg"] Jan 21 14:34:48 crc kubenswrapper[4834]: I0121 14:34:48.749159 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hlmg"] Jan 21 14:34:49 crc kubenswrapper[4834]: I0121 14:34:49.464121 4834 scope.go:117] "RemoveContainer" containerID="282cedaad58d3d34d7e5bb85a8502333f421d84fe7a3751e1730279f60e02786" Jan 21 14:34:49 crc kubenswrapper[4834]: I0121 14:34:49.941456 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:34:49 crc kubenswrapper[4834]: I0121 14:34:49.980838 4834 scope.go:117] "RemoveContainer" containerID="250288771a7af048a1fc19e5cf82d253a79f2f21809311e81bca593e0b6447d2" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.037885 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") pod \"da7838db-42fa-496d-bdec-712d5fcc46c6\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.037983 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") pod \"da7838db-42fa-496d-bdec-712d5fcc46c6\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.038090 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhdfv\" (UniqueName: \"kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv\") pod \"da7838db-42fa-496d-bdec-712d5fcc46c6\" (UID: \"da7838db-42fa-496d-bdec-712d5fcc46c6\") " Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.039158 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "da7838db-42fa-496d-bdec-712d5fcc46c6" (UID: "da7838db-42fa-496d-bdec-712d5fcc46c6"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.043334 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "da7838db-42fa-496d-bdec-712d5fcc46c6" (UID: "da7838db-42fa-496d-bdec-712d5fcc46c6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.044226 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv" (OuterVolumeSpecName: "kube-api-access-nhdfv") pod "da7838db-42fa-496d-bdec-712d5fcc46c6" (UID: "da7838db-42fa-496d-bdec-712d5fcc46c6"). InnerVolumeSpecName "kube-api-access-nhdfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.140027 4834 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.140507 4834 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da7838db-42fa-496d-bdec-712d5fcc46c6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.140531 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhdfv\" (UniqueName: \"kubernetes.io/projected/da7838db-42fa-496d-bdec-712d5fcc46c6-kube-api-access-nhdfv\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.333983 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8088483c-12ae-4825-a95a-42bec2973b76" path="/var/lib/kubelet/pods/8088483c-12ae-4825-a95a-42bec2973b76/volumes" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.443674 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lt7k4"] Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.446191 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerStarted","Data":"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.446377 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-769ws" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="registry-server" containerID="cri-o://c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.448538 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsddw" event={"ID":"67c225e2-f31a-4572-814e-804233b6c1fd","Type":"ContainerStarted","Data":"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.448685 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsddw" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" 
containerName="extract-content" containerID="cri-o://97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.451025 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" event={"ID":"da7838db-42fa-496d-bdec-712d5fcc46c6","Type":"ContainerDied","Data":"a81f439c4c5265af4317c4bb0ffbbda06c51577dce4b9a2389ceb105701585bf"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.451065 4834 scope.go:117] "RemoveContainer" containerID="9304d7afc79f0e1878eeb5d03d2d81740852dbd4b9bb0f68647536a0c791773c" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.451140 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v9x5s" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.456224 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerStarted","Data":"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.456410 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bglrk" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="registry-server" containerID="cri-o://e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.459778 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerStarted","Data":"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.459941 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n466t" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="registry-server" containerID="cri-o://313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.462693 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2bs" event={"ID":"bc6967c6-5420-404e-88dd-95664165decf","Type":"ContainerStarted","Data":"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.462888 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s2bs" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-content" containerID="cri-o://29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: W0121 14:34:50.467431 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494aef4f_fcf4_422a_be16_b39449045941.slice/crio-d998705bacc04c9af9595d951b5946c95b560720d4be62b2d0ff65384b8e5548 WatchSource:0}: Error finding container d998705bacc04c9af9595d951b5946c95b560720d4be62b2d0ff65384b8e5548: Status 404 returned error can't find the container with id d998705bacc04c9af9595d951b5946c95b560720d4be62b2d0ff65384b8e5548 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.468614 4834 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerStarted","Data":"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.468824 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ltsnd" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="registry-server" containerID="cri-o://dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.474501 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerStarted","Data":"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40"} Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.474718 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzgrw" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="registry-server" containerID="cri-o://bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40" gracePeriod=30 Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.490090 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-769ws" podStartSLOduration=4.536550311 podStartE2EDuration="1m21.490063988s" podCreationTimestamp="2026-01-21 14:33:29 +0000 UTC" firstStartedPulling="2026-01-21 14:33:32.511065667 +0000 UTC m=+158.485414712" lastFinishedPulling="2026-01-21 14:34:49.464579334 +0000 UTC m=+235.438928389" observedRunningTime="2026-01-21 14:34:50.478408764 +0000 UTC m=+236.452757809" watchObservedRunningTime="2026-01-21 14:34:50.490063988 +0000 UTC m=+236.464413043" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.492823 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.496157 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v9x5s"] Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.499657 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzgrw" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.570573 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzgrw" podStartSLOduration=3.065044752 podStartE2EDuration="1m20.570544228s" podCreationTimestamp="2026-01-21 14:33:30 +0000 UTC" firstStartedPulling="2026-01-21 14:33:32.490086705 +0000 UTC m=+158.464435750" lastFinishedPulling="2026-01-21 14:34:49.995586191 +0000 UTC m=+235.969935226" observedRunningTime="2026-01-21 14:34:50.537258621 +0000 UTC m=+236.511607666" watchObservedRunningTime="2026-01-21 14:34:50.570544228 +0000 UTC m=+236.544893273" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.571263 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bglrk" podStartSLOduration=3.16718892 podStartE2EDuration="1m17.571252791s" podCreationTimestamp="2026-01-21 14:33:33 +0000 UTC" firstStartedPulling="2026-01-21 14:33:35.611352006 +0000 UTC m=+161.585701051" lastFinishedPulling="2026-01-21 
14:34:50.015415857 +0000 UTC m=+235.989764922" observedRunningTime="2026-01-21 14:34:50.554766383 +0000 UTC m=+236.529115448" watchObservedRunningTime="2026-01-21 14:34:50.571252791 +0000 UTC m=+236.545601846" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.574627 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ltsnd" podStartSLOduration=4.072253844 podStartE2EDuration="1m21.574606029s" podCreationTimestamp="2026-01-21 14:33:29 +0000 UTC" firstStartedPulling="2026-01-21 14:33:32.478831994 +0000 UTC m=+158.453181049" lastFinishedPulling="2026-01-21 14:34:49.981184189 +0000 UTC m=+235.955533234" observedRunningTime="2026-01-21 14:34:50.572299245 +0000 UTC m=+236.546648290" watchObservedRunningTime="2026-01-21 14:34:50.574606029 +0000 UTC m=+236.548955074" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.613587 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n466t" podStartSLOduration=3.210309247 podStartE2EDuration="1m19.613567318s" podCreationTimestamp="2026-01-21 14:33:31 +0000 UTC" firstStartedPulling="2026-01-21 14:33:33.54802986 +0000 UTC m=+159.522378895" lastFinishedPulling="2026-01-21 14:34:49.951287881 +0000 UTC m=+235.925636966" observedRunningTime="2026-01-21 14:34:50.608543597 +0000 UTC m=+236.582892662" watchObservedRunningTime="2026-01-21 14:34:50.613567318 +0000 UTC m=+236.587916363" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.925393 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltsnd_db774f6a-d370-4725-a77d-35da37c572d1/registry-server/0.log" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.927518 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltsnd" Jan 21 14:34:50 crc kubenswrapper[4834]: I0121 14:34:50.952122 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-769ws" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.027503 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-769ws_f245cd57-57f1-40a7-b0c5-edb85e06871d/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.028311 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-769ws" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.033497 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hzgrw_7708819f-d97a-47a7-b9a6-51fe7a7f503f/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.034834 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrw" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.040209 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dsddw_67c225e2-f31a-4572-814e-804233b6c1fd/extract-content/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.041549 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.054481 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n466t_8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.058681 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n466t" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.059632 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8s2bs_bc6967c6-5420-404e-88dd-95664165decf/extract-content/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.060030 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.062010 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bglrk_eddf9593-c223-4b82-9774-6059413ae2d0/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.062659 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.064547 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities\") pod \"db774f6a-d370-4725-a77d-35da37c572d1\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.064691 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content\") pod \"db774f6a-d370-4725-a77d-35da37c572d1\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.064762 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzx66\" (UniqueName: \"kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66\") pod \"db774f6a-d370-4725-a77d-35da37c572d1\" (UID: \"db774f6a-d370-4725-a77d-35da37c572d1\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.065850 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities" (OuterVolumeSpecName: "utilities") pod "db774f6a-d370-4725-a77d-35da37c572d1" (UID: "db774f6a-d370-4725-a77d-35da37c572d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.079504 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66" (OuterVolumeSpecName: "kube-api-access-jzx66") pod "db774f6a-d370-4725-a77d-35da37c572d1" (UID: "db774f6a-d370-4725-a77d-35da37c572d1"). InnerVolumeSpecName "kube-api-access-jzx66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.126755 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db774f6a-d370-4725-a77d-35da37c572d1" (UID: "db774f6a-d370-4725-a77d-35da37c572d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.165877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x4sh\" (UniqueName: \"kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh\") pod \"67c225e2-f31a-4572-814e-804233b6c1fd\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166229 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddcbn\" (UniqueName: \"kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn\") pod \"eddf9593-c223-4b82-9774-6059413ae2d0\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166380 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nbww\" (UniqueName: \"kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww\") pod \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166464 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content\") pod \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166559 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities\") pod \"bc6967c6-5420-404e-88dd-95664165decf\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166646 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content\") pod \"f245cd57-57f1-40a7-b0c5-edb85e06871d\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166784 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities\") pod \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166887 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities\") pod \"f245cd57-57f1-40a7-b0c5-edb85e06871d\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.166976 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities\") pod \"eddf9593-c223-4b82-9774-6059413ae2d0\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167059 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content\") pod \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\" (UID: \"7708819f-d97a-47a7-b9a6-51fe7a7f503f\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167137 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities\") pod \"67c225e2-f31a-4572-814e-804233b6c1fd\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167204 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgzbc\" (UniqueName: \"kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc\") pod \"bc6967c6-5420-404e-88dd-95664165decf\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167291 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwq7g\" (UniqueName: \"kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g\") pod \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167378 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5bg\" (UniqueName: \"kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg\") pod \"f245cd57-57f1-40a7-b0c5-edb85e06871d\" (UID: \"f245cd57-57f1-40a7-b0c5-edb85e06871d\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167461 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities\") pod \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\" (UID: \"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167535 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content\") pod \"bc6967c6-5420-404e-88dd-95664165decf\" (UID: \"bc6967c6-5420-404e-88dd-95664165decf\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167627 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content\") pod \"67c225e2-f31a-4572-814e-804233b6c1fd\" (UID: \"67c225e2-f31a-4572-814e-804233b6c1fd\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.167863 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content\") pod \"eddf9593-c223-4b82-9774-6059413ae2d0\" (UID: \"eddf9593-c223-4b82-9774-6059413ae2d0\") " Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.168306 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.168422 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzx66\" (UniqueName: \"kubernetes.io/projected/db774f6a-d370-4725-a77d-35da37c572d1-kube-api-access-jzx66\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.168517 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db774f6a-d370-4725-a77d-35da37c572d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.168996 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities" (OuterVolumeSpecName: "utilities") pod "67c225e2-f31a-4572-814e-804233b6c1fd" (UID: "67c225e2-f31a-4572-814e-804233b6c1fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.169078 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities" (OuterVolumeSpecName: "utilities") pod "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" (UID: "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.170101 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities" (OuterVolumeSpecName: "utilities") pod "f245cd57-57f1-40a7-b0c5-edb85e06871d" (UID: "f245cd57-57f1-40a7-b0c5-edb85e06871d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.170539 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities" (OuterVolumeSpecName: "utilities") pod "bc6967c6-5420-404e-88dd-95664165decf" (UID: "bc6967c6-5420-404e-88dd-95664165decf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.171103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww" (OuterVolumeSpecName: "kube-api-access-6nbww") pod "7708819f-d97a-47a7-b9a6-51fe7a7f503f" (UID: "7708819f-d97a-47a7-b9a6-51fe7a7f503f"). InnerVolumeSpecName "kube-api-access-6nbww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.171325 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities" (OuterVolumeSpecName: "utilities") pod "eddf9593-c223-4b82-9774-6059413ae2d0" (UID: "eddf9593-c223-4b82-9774-6059413ae2d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.171457 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities" (OuterVolumeSpecName: "utilities") pod "7708819f-d97a-47a7-b9a6-51fe7a7f503f" (UID: "7708819f-d97a-47a7-b9a6-51fe7a7f503f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.171438 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg" (OuterVolumeSpecName: "kube-api-access-nk5bg") pod "f245cd57-57f1-40a7-b0c5-edb85e06871d" (UID: "f245cd57-57f1-40a7-b0c5-edb85e06871d"). InnerVolumeSpecName "kube-api-access-nk5bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.184689 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g" (OuterVolumeSpecName: "kube-api-access-mwq7g") pod "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" (UID: "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb"). InnerVolumeSpecName "kube-api-access-mwq7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.185642 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc" (OuterVolumeSpecName: "kube-api-access-lgzbc") pod "bc6967c6-5420-404e-88dd-95664165decf" (UID: "bc6967c6-5420-404e-88dd-95664165decf"). InnerVolumeSpecName "kube-api-access-lgzbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.185754 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c225e2-f31a-4572-814e-804233b6c1fd" (UID: "67c225e2-f31a-4572-814e-804233b6c1fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.186890 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn" (OuterVolumeSpecName: "kube-api-access-ddcbn") pod "eddf9593-c223-4b82-9774-6059413ae2d0" (UID: "eddf9593-c223-4b82-9774-6059413ae2d0"). InnerVolumeSpecName "kube-api-access-ddcbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.193831 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc6967c6-5420-404e-88dd-95664165decf" (UID: "bc6967c6-5420-404e-88dd-95664165decf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.197280 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" (UID: "8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.198114 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh" (OuterVolumeSpecName: "kube-api-access-4x4sh") pod "67c225e2-f31a-4572-814e-804233b6c1fd" (UID: "67c225e2-f31a-4572-814e-804233b6c1fd"). InnerVolumeSpecName "kube-api-access-4x4sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.235754 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7708819f-d97a-47a7-b9a6-51fe7a7f503f" (UID: "7708819f-d97a-47a7-b9a6-51fe7a7f503f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.246002 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f245cd57-57f1-40a7-b0c5-edb85e06871d" (UID: "f245cd57-57f1-40a7-b0c5-edb85e06871d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270381 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270417 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270428 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270437 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f245cd57-57f1-40a7-b0c5-edb85e06871d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270448 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270457 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708819f-d97a-47a7-b9a6-51fe7a7f503f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270467 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270476 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgzbc\" (UniqueName: \"kubernetes.io/projected/bc6967c6-5420-404e-88dd-95664165decf-kube-api-access-lgzbc\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270486 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwq7g\" (UniqueName: \"kubernetes.io/projected/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-kube-api-access-mwq7g\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270497 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5bg\" (UniqueName: \"kubernetes.io/projected/f245cd57-57f1-40a7-b0c5-edb85e06871d-kube-api-access-nk5bg\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270508 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270518 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc6967c6-5420-404e-88dd-95664165decf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270530 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c225e2-f31a-4572-814e-804233b6c1fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc 
kubenswrapper[4834]: I0121 14:34:51.270541 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x4sh\" (UniqueName: \"kubernetes.io/projected/67c225e2-f31a-4572-814e-804233b6c1fd-kube-api-access-4x4sh\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270552 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddcbn\" (UniqueName: \"kubernetes.io/projected/eddf9593-c223-4b82-9774-6059413ae2d0-kube-api-access-ddcbn\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270564 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nbww\" (UniqueName: \"kubernetes.io/projected/7708819f-d97a-47a7-b9a6-51fe7a7f503f-kube-api-access-6nbww\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.270576 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.393557 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eddf9593-c223-4b82-9774-6059413ae2d0" (UID: "eddf9593-c223-4b82-9774-6059413ae2d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.473700 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eddf9593-c223-4b82-9774-6059413ae2d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.480417 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n466t_8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.481184 4834 generic.go:334] "Generic (PLEG): container finished" podID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerID="313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631" exitCode=1 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.481250 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n466t" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.481268 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerDied","Data":"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.481303 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n466t" event={"ID":"8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb","Type":"ContainerDied","Data":"a529d03002b415dc00b6736bfdcfd96365a51922ed78b276a61507d85458c3fa"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.481323 4834 scope.go:117] "RemoveContainer" containerID="313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.482530 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dsddw_67c225e2-f31a-4572-814e-804233b6c1fd/extract-content/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.483219 4834 generic.go:334] "Generic (PLEG): container finished" podID="67c225e2-f31a-4572-814e-804233b6c1fd" containerID="97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c" exitCode=2 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.483289 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsddw" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.483288 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsddw" event={"ID":"67c225e2-f31a-4572-814e-804233b6c1fd","Type":"ContainerDied","Data":"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.483401 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsddw" event={"ID":"67c225e2-f31a-4572-814e-804233b6c1fd","Type":"ContainerDied","Data":"948ac30da8c72a42326227ac1d4db168b7c941a9ebd910312f9ac130ff631ca5"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.487100 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bglrk_eddf9593-c223-4b82-9774-6059413ae2d0/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.487671 4834 generic.go:334] "Generic (PLEG): container finished" podID="eddf9593-c223-4b82-9774-6059413ae2d0" containerID="e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab" exitCode=1 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.487772 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bglrk" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.489181 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerDied","Data":"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.489255 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bglrk" event={"ID":"eddf9593-c223-4b82-9774-6059413ae2d0","Type":"ContainerDied","Data":"200055577f91a865651ef5347133df20487a4829e6a8a4ca8e638d6580fc713e"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.490934 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8s2bs_bc6967c6-5420-404e-88dd-95664165decf/extract-content/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.492097 4834 generic.go:334] "Generic (PLEG): container finished" podID="bc6967c6-5420-404e-88dd-95664165decf" containerID="29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7" exitCode=2 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.492156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2bs" event={"ID":"bc6967c6-5420-404e-88dd-95664165decf","Type":"ContainerDied","Data":"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.492179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s2bs" event={"ID":"bc6967c6-5420-404e-88dd-95664165decf","Type":"ContainerDied","Data":"186bc3d1fdcaaeb50660068edf5813d24bb57e8e1d8742b1f32673d32f808abf"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.492264 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s2bs" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.496380 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ltsnd_db774f6a-d370-4725-a77d-35da37c572d1/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.497626 4834 generic.go:334] "Generic (PLEG): container finished" podID="db774f6a-d370-4725-a77d-35da37c572d1" containerID="dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62" exitCode=2 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.497690 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ltsnd" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.497694 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerDied","Data":"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.497757 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltsnd" event={"ID":"db774f6a-d370-4725-a77d-35da37c572d1","Type":"ContainerDied","Data":"a6d984d4aa9eb5e80426d6f99ea2681a896b25399007fa8412b73f9960d10701"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.499267 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hzgrw_7708819f-d97a-47a7-b9a6-51fe7a7f503f/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.499840 4834 generic.go:334] "Generic (PLEG): container finished" podID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerID="bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40" exitCode=1 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.499900 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerDied","Data":"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.499961 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzgrw" event={"ID":"7708819f-d97a-47a7-b9a6-51fe7a7f503f","Type":"ContainerDied","Data":"639204a681f088c6649a51dcd299d43adc02c5133a7273c49acadb44c6d46b62"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.500043 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzgrw" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.504329 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-769ws_f245cd57-57f1-40a7-b0c5-edb85e06871d/registry-server/0.log" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.506065 4834 generic.go:334] "Generic (PLEG): container finished" podID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerID="c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0" exitCode=1 Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.506130 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerDied","Data":"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.506156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-769ws" event={"ID":"f245cd57-57f1-40a7-b0c5-edb85e06871d","Type":"ContainerDied","Data":"0f92a4c8b56e272750a0b520d638e1744e98bc0cebd8aa67dd49f56a97a7a681"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.506241 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-769ws" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.507517 4834 scope.go:117] "RemoveContainer" containerID="9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.509134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" event={"ID":"494aef4f-fcf4-422a-be16-b39449045941","Type":"ContainerStarted","Data":"43b069ad5c5f0e3e931cb75fbc5d51b56f91ade726b282a6967ee9e1e9329be5"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.509181 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" event={"ID":"494aef4f-fcf4-422a-be16-b39449045941","Type":"ContainerStarted","Data":"d998705bacc04c9af9595d951b5946c95b560720d4be62b2d0ff65384b8e5548"} Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.509611 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.520396 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.529572 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lt7k4" podStartSLOduration=6.529549962 podStartE2EDuration="6.529549962s" podCreationTimestamp="2026-01-21 14:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:34:51.526120001 +0000 UTC m=+237.500469046" watchObservedRunningTime="2026-01-21 14:34:51.529549962 +0000 UTC m=+237.503898997" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.537284 4834 scope.go:117] "RemoveContainer" containerID="926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.542985 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.546346 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n466t"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.583345 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.590081 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsddw"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.591781 4834 scope.go:117] "RemoveContainer" containerID="313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.592327 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631\": container with ID starting with 313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631 not found: ID does not exist" containerID="313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.592373 4834 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631"} err="failed to get container status \"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631\": rpc error: code = NotFound desc = could not find container \"313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631\": container with ID starting with 313c52583e56e4bfc97961c2e54534fe8952c151c4dffdca7a98111826c57631 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.592407 4834 scope.go:117] "RemoveContainer" containerID="9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.592976 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b\": container with ID starting with 9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b not found: ID does not exist" containerID="9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.593012 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b"} err="failed to get container status \"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b\": rpc error: code = NotFound desc = could not find container \"9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b\": container with ID starting with 9837b5490fe250821b53cc3fb2c7e8d4e7136548f4d71a155fc7c4ff6e91534b not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.593027 4834 scope.go:117] "RemoveContainer" containerID="926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.594651 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955\": container with ID starting with 926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955 not found: ID does not exist" containerID="926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.594688 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955"} err="failed to get container status \"926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955\": rpc error: code = NotFound desc = could not find container \"926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955\": container with ID starting with 926e2fea34a73d45ded9ecfcaee0d22c12960977398b2ac07da5863f70bde955 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.594707 4834 scope.go:117] "RemoveContainer" containerID="97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.613744 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.618485 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bglrk"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.627099 4834 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.628691 4834 scope.go:117] "RemoveContainer" containerID="d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.632541 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzgrw"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.639661 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.643705 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ltsnd"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.648775 4834 scope.go:117] "RemoveContainer" containerID="97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.649267 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c\": container with ID starting with 97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c not found: ID does not exist" containerID="97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.649315 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c"} err="failed to get container status \"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c\": rpc error: code = NotFound desc = could not find container \"97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c\": container with ID starting with 97046c6716b4bb39603eb3e063f70515364bdefa43a1e696bdf769c04261423c not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.649350 4834 scope.go:117] "RemoveContainer" containerID="d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.649755 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285\": container with ID starting with d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285 not found: ID does not exist" containerID="d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.649795 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285"} err="failed to get container status \"d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285\": rpc error: code = NotFound desc = could not find container \"d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285\": container with ID starting with d029304cf16ea8fc05936e54b9c0d4da559db559b4e8a710d3a817e68aa55285 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.649820 4834 scope.go:117] "RemoveContainer" containerID="e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.675960 4834 scope.go:117] "RemoveContainer" 
containerID="f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.681479 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.695540 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s2bs"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.696368 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-769ws"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.700516 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-769ws"] Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.706019 4834 scope.go:117] "RemoveContainer" containerID="c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.719351 4834 scope.go:117] "RemoveContainer" containerID="e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.719727 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab\": container with ID starting with e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab not found: ID does not exist" containerID="e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.719755 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab"} err="failed to get container status \"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab\": rpc error: code = NotFound desc = could not find container \"e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab\": container with ID starting with e0857f030eaed724fe502d9f7c016c0709e00f0bf218e9dd916d62cc5c7897ab not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.719778 4834 scope.go:117] "RemoveContainer" containerID="f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.720001 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f\": container with ID starting with f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f not found: ID does not exist" containerID="f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.720021 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f"} err="failed to get container status \"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f\": rpc error: code = NotFound desc = could not find container \"f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f\": container with ID starting with f55481586e49d7cf7221af65b8202fe830d0d2443bd94bebf58572e37272d22f not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.720033 4834 scope.go:117] "RemoveContainer" 
containerID="c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.720328 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2\": container with ID starting with c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2 not found: ID does not exist" containerID="c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.720348 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2"} err="failed to get container status \"c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2\": rpc error: code = NotFound desc = could not find container \"c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2\": container with ID starting with c753d452fc62597969f1660ba8381eb4e1114a3e9d3c20d2a72630a0be070ce2 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.720359 4834 scope.go:117] "RemoveContainer" containerID="29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.730796 4834 scope.go:117] "RemoveContainer" containerID="32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.745261 4834 scope.go:117] "RemoveContainer" containerID="29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.745777 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7\": container with ID starting with 29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7 not found: ID does not exist" containerID="29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.745817 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7"} err="failed to get container status \"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7\": rpc error: code = NotFound desc = could not find container \"29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7\": container with ID starting with 29d4036885f6ad84f9781b3c20ee3a791038ecf31878bf2b3fb64387b52379c7 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.746027 4834 scope.go:117] "RemoveContainer" containerID="32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.746296 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3\": container with ID starting with 32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3 not found: ID does not exist" containerID="32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.746325 4834 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3"} err="failed to get container status \"32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3\": rpc error: code = NotFound desc = could not find container \"32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3\": container with ID starting with 32121f625303016ce1e0eee5504134ead36207fa0629eb8c255ffeb6d35740e3 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.746344 4834 scope.go:117] "RemoveContainer" containerID="dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.763020 4834 scope.go:117] "RemoveContainer" containerID="86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.779001 4834 scope.go:117] "RemoveContainer" containerID="c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.793962 4834 scope.go:117] "RemoveContainer" containerID="dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.794355 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62\": container with ID starting with dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62 not found: ID does not exist" containerID="dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.794391 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62"} err="failed to get container status \"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62\": rpc error: code = NotFound desc = could not find container \"dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62\": container with ID starting with dc09829b48bd0144a63aae32eab1bae69c6981ac527957a9175ba694057f0a62 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.794421 4834 scope.go:117] "RemoveContainer" containerID="86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.794791 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108\": container with ID starting with 86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108 not found: ID does not exist" containerID="86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.794813 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108"} err="failed to get container status \"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108\": rpc error: code = NotFound desc = could not find container \"86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108\": container with ID starting with 86c25ac5f6cbf74fdd49fea73bf7e9accedbd82d48f3e23a88d7f021ae93e108 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.794829 4834 
scope.go:117] "RemoveContainer" containerID="c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.795118 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389\": container with ID starting with c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389 not found: ID does not exist" containerID="c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.795143 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389"} err="failed to get container status \"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389\": rpc error: code = NotFound desc = could not find container \"c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389\": container with ID starting with c4fbd5c99e4de64980885b0fa5bfeae395f6b1a9b1466aee996fc247ef8e0389 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.795159 4834 scope.go:117] "RemoveContainer" containerID="bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.806020 4834 scope.go:117] "RemoveContainer" containerID="275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.820764 4834 scope.go:117] "RemoveContainer" containerID="21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.833785 4834 scope.go:117] "RemoveContainer" containerID="bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.834287 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40\": container with ID starting with bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40 not found: ID does not exist" containerID="bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834315 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40"} err="failed to get container status \"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40\": rpc error: code = NotFound desc = could not find container \"bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40\": container with ID starting with bb61edd586d9179b7717d4a31794dfd8e3e680d088892897309f8c25bb94be40 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834338 4834 scope.go:117] "RemoveContainer" containerID="275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.834586 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1\": container with ID starting with 275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1 not found: ID does not exist" 
containerID="275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834608 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1"} err="failed to get container status \"275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1\": rpc error: code = NotFound desc = could not find container \"275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1\": container with ID starting with 275cf1ec8a8d67bcdc6e1d6fa3d6552b89cbeb16b6741d61837a2a54e9dde3e1 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834622 4834 scope.go:117] "RemoveContainer" containerID="21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.834816 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400\": container with ID starting with 21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400 not found: ID does not exist" containerID="21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834837 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400"} err="failed to get container status \"21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400\": rpc error: code = NotFound desc = could not find container \"21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400\": container with ID starting with 21d9751715ee0c61c24ff04e49bea0924e556304cdb4d07bb210e1969242a400 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.834853 4834 scope.go:117] "RemoveContainer" containerID="c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.849594 4834 scope.go:117] "RemoveContainer" containerID="6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.863841 4834 scope.go:117] "RemoveContainer" containerID="2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.877624 4834 scope.go:117] "RemoveContainer" containerID="c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.878118 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0\": container with ID starting with c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0 not found: ID does not exist" containerID="c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.878155 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0"} err="failed to get container status \"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0\": rpc error: code = NotFound desc = could not find container 
\"c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0\": container with ID starting with c9263c246be3cc25bf9777ff4583b33ea802fa170253a3f218e682fd867419d0 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.878186 4834 scope.go:117] "RemoveContainer" containerID="6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.878484 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30\": container with ID starting with 6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30 not found: ID does not exist" containerID="6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.878511 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30"} err="failed to get container status \"6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30\": rpc error: code = NotFound desc = could not find container \"6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30\": container with ID starting with 6dd2769aec5e9b02cbedf20a33bb90f3ce33faa25e4d169cb0cfb7dbcc530f30 not found: ID does not exist" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.878530 4834 scope.go:117] "RemoveContainer" containerID="2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0" Jan 21 14:34:51 crc kubenswrapper[4834]: E0121 14:34:51.878793 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0\": container with ID starting with 2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0 not found: ID does not exist" containerID="2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0" Jan 21 14:34:51 crc kubenswrapper[4834]: I0121 14:34:51.878818 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0"} err="failed to get container status \"2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0\": rpc error: code = NotFound desc = could not find container \"2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0\": container with ID starting with 2cde9401f7eb40fe86a4270b9071626bd010710fc745257897307f33207402d0 not found: ID does not exist" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296057 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n2fjs"] Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296295 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296309 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296318 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296323 4834 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296332 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296338 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296346 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296353 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296360 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296366 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296377 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296383 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296390 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296395 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296402 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296408 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296418 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296425 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296433 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296438 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296446 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296452 
4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296460 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296465 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296472 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296478 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296485 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296491 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296499 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296504 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296511 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296517 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296524 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296529 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296536 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296542 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296553 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296559 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296568 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="extract-content" Jan 21 14:34:52 crc 
kubenswrapper[4834]: I0121 14:34:52.296574 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296582 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296589 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296598 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296604 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="extract-utilities" Jan 21 14:34:52 crc kubenswrapper[4834]: E0121 14:34:52.296613 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296618 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296704 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296719 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296730 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296742 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296844 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6967c6-5420-404e-88dd-95664165decf" containerName="extract-content" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296854 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296863 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" containerName="marketplace-operator" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296872 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="db774f6a-d370-4725-a77d-35da37c572d1" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.296883 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8088483c-12ae-4825-a95a-42bec2973b76" containerName="registry-server" Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.297762 4834 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.300285 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.306888 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2fjs"]
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.332116 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c225e2-f31a-4572-814e-804233b6c1fd" path="/var/lib/kubelet/pods/67c225e2-f31a-4572-814e-804233b6c1fd/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.332718 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7708819f-d97a-47a7-b9a6-51fe7a7f503f" path="/var/lib/kubelet/pods/7708819f-d97a-47a7-b9a6-51fe7a7f503f/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.333348 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb" path="/var/lib/kubelet/pods/8fa83442-f8b0-4acb-8434-c6cb2c9cbbeb/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.334438 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6967c6-5420-404e-88dd-95664165decf" path="/var/lib/kubelet/pods/bc6967c6-5420-404e-88dd-95664165decf/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.335014 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7838db-42fa-496d-bdec-712d5fcc46c6" path="/var/lib/kubelet/pods/da7838db-42fa-496d-bdec-712d5fcc46c6/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.335457 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db774f6a-d370-4725-a77d-35da37c572d1" path="/var/lib/kubelet/pods/db774f6a-d370-4725-a77d-35da37c572d1/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.336466 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddf9593-c223-4b82-9774-6059413ae2d0" path="/var/lib/kubelet/pods/eddf9593-c223-4b82-9774-6059413ae2d0/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.337027 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f245cd57-57f1-40a7-b0c5-edb85e06871d" path="/var/lib/kubelet/pods/f245cd57-57f1-40a7-b0c5-edb85e06871d/volumes"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.393296 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xng\" (UniqueName: \"kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.393442 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.393493 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.494816 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xng\" (UniqueName: \"kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.494920 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.494966 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.496095 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.496241 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.514798 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xng\" (UniqueName: \"kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng\") pod \"community-operators-n2fjs\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:52 crc kubenswrapper[4834]: I0121 14:34:52.622285 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:34:53 crc kubenswrapper[4834]: I0121 14:34:53.010323 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n2fjs"]
Jan 21 14:34:53 crc kubenswrapper[4834]: I0121 14:34:53.532323 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerStarted","Data":"379f26697bf9de53971efb08c3383a34934b16a094739e945189c0c2449f6da4"}
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.099111 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8bbsd"]
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.100796 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.104261 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.114804 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bbsd"]
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.219391 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-utilities\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.219492 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-catalog-content\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.219707 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrzf\" (UniqueName: \"kubernetes.io/projected/d3416206-7706-4072-af2f-e5bb2606aef0-kube-api-access-fsrzf\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.321692 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrzf\" (UniqueName: \"kubernetes.io/projected/d3416206-7706-4072-af2f-e5bb2606aef0-kube-api-access-fsrzf\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.321798 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-utilities\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.321859 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-catalog-content\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.322649 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-catalog-content\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.325237 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3416206-7706-4072-af2f-e5bb2606aef0-utilities\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.342810 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrzf\" (UniqueName: \"kubernetes.io/projected/d3416206-7706-4072-af2f-e5bb2606aef0-kube-api-access-fsrzf\") pod \"redhat-marketplace-8bbsd\" (UID: \"d3416206-7706-4072-af2f-e5bb2606aef0\") " pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.433223 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.442459 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.543241 4834 generic.go:334] "Generic (PLEG): container finished" podID="0c18088d-a345-4848-a9d4-407441f5bb99" containerID="11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250" exitCode=0
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.543669 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerDied","Data":"11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250"}
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.638569 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8bbsd"]
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.705504 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7g4k"]
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.707874 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.714621 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.726021 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7g4k"]
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.832217 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-utilities\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.832741 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-catalog-content\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.832784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn5f\" (UniqueName: \"kubernetes.io/projected/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-kube-api-access-gvn5f\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.934851 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn5f\" (UniqueName: \"kubernetes.io/projected/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-kube-api-access-gvn5f\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.935092 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-utilities\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.935206 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-catalog-content\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.936144 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-utilities\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.936365 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-catalog-content\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:54 crc kubenswrapper[4834]: I0121 14:34:54.953622 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn5f\" (UniqueName: \"kubernetes.io/projected/81c693e3-bcc4-4d1f-80d4-cf7aed592bc7-kube-api-access-gvn5f\") pod \"redhat-operators-p7g4k\" (UID: \"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7\") " pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.064451 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.161545 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mcc7f"
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.247008 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qg6w9"]
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.500708 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7g4k"]
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.549561 4834 generic.go:334] "Generic (PLEG): container finished" podID="0c18088d-a345-4848-a9d4-407441f5bb99" containerID="c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548" exitCode=0
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.549641 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerDied","Data":"c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548"}
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.551299 4834 generic.go:334] "Generic (PLEG): container finished" podID="d3416206-7706-4072-af2f-e5bb2606aef0" containerID="445e7f3498c9cf82b34d2e81762c8edcccb191043c1e2f0ee85bd72cdf6bf9f2" exitCode=0
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.551359 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bbsd" event={"ID":"d3416206-7706-4072-af2f-e5bb2606aef0","Type":"ContainerDied","Data":"445e7f3498c9cf82b34d2e81762c8edcccb191043c1e2f0ee85bd72cdf6bf9f2"}
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.551439 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bbsd" event={"ID":"d3416206-7706-4072-af2f-e5bb2606aef0","Type":"ContainerStarted","Data":"4ab50b32172a3667714d10acc6c226453869eb294fdc94d271e868883505d084"}
Jan 21 14:34:55 crc kubenswrapper[4834]: I0121 14:34:55.553293 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g4k" event={"ID":"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7","Type":"ContainerStarted","Data":"952d68e099f9d058a79e8e9243bcd724991069220385a9cd0e35950d46beda2b"}
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.502888 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"]
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.504279 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.506123 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.509096 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"]
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.560188 4834 generic.go:334] "Generic (PLEG): container finished" podID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" containerID="b1c4850118508249026218a1c6be09f058aad2eb96f97ba603487c99d8940632" exitCode=0
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.560246 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g4k" event={"ID":"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7","Type":"ContainerDied","Data":"b1c4850118508249026218a1c6be09f058aad2eb96f97ba603487c99d8940632"}
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.565780 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerStarted","Data":"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef"}
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.567860 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bbsd" event={"ID":"d3416206-7706-4072-af2f-e5bb2606aef0","Type":"ContainerStarted","Data":"8d7fc19f60c358bac2acc43cc01202278cee8381230485ba884ad65a39f90320"}
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.597657 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n2fjs" podStartSLOduration=3.128758479 podStartE2EDuration="4.597632243s" podCreationTimestamp="2026-01-21 14:34:52 +0000 UTC" firstStartedPulling="2026-01-21 14:34:54.54597726 +0000 UTC m=+240.520326295" lastFinishedPulling="2026-01-21 14:34:56.014851024 +0000 UTC m=+241.989200059" observedRunningTime="2026-01-21 14:34:56.595591156 +0000 UTC m=+242.569940201" watchObservedRunningTime="2026-01-21 14:34:56.597632243 +0000 UTC m=+242.571981288"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.657492 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.657601 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrl5\" (UniqueName: \"kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.657635 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.758496 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.758571 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.758646 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbrl5\" (UniqueName: \"kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.759510 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.759598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.778598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbrl5\" (UniqueName: \"kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5\") pod \"certified-operators-2qf4g\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:56 crc kubenswrapper[4834]: I0121 14:34:56.819460 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.216351 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"]
Jan 21 14:34:57 crc kubenswrapper[4834]: W0121 14:34:57.224775 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25a6ac4b_5bd7_468a_888d_0b7fcc3d290e.slice/crio-2e413d42bf396d4a7d00f02209427f0ed2f91d82ac22126c4940b58559da1980 WatchSource:0}: Error finding container 2e413d42bf396d4a7d00f02209427f0ed2f91d82ac22126c4940b58559da1980: Status 404 returned error can't find the container with id 2e413d42bf396d4a7d00f02209427f0ed2f91d82ac22126c4940b58559da1980
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.574419 4834 generic.go:334] "Generic (PLEG): container finished" podID="d3416206-7706-4072-af2f-e5bb2606aef0" containerID="8d7fc19f60c358bac2acc43cc01202278cee8381230485ba884ad65a39f90320" exitCode=0
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.574773 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bbsd" event={"ID":"d3416206-7706-4072-af2f-e5bb2606aef0","Type":"ContainerDied","Data":"8d7fc19f60c358bac2acc43cc01202278cee8381230485ba884ad65a39f90320"}
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.574801 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8bbsd" event={"ID":"d3416206-7706-4072-af2f-e5bb2606aef0","Type":"ContainerStarted","Data":"c6128e17a995f20a78532cfd4d33b6e72bfaa11d7182b5d4fd1d977e12d06928"}
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.577190 4834 generic.go:334] "Generic (PLEG): container finished" podID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerID="baba8b3471447007de786be507a8db435ca790e3533a61bfaacfc23219fcfe7d" exitCode=0
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.577248 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerDied","Data":"baba8b3471447007de786be507a8db435ca790e3533a61bfaacfc23219fcfe7d"}
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.577273 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerStarted","Data":"2e413d42bf396d4a7d00f02209427f0ed2f91d82ac22126c4940b58559da1980"}
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.582801 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g4k" event={"ID":"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7","Type":"ContainerStarted","Data":"e7d73a2252a142e53d2757fb02a352a35db3653955a4100c24c6add27b1d6859"}
Jan 21 14:34:57 crc kubenswrapper[4834]: I0121 14:34:57.593415 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8bbsd" podStartSLOduration=2.157710486 podStartE2EDuration="3.593397804s" podCreationTimestamp="2026-01-21 14:34:54 +0000 UTC" firstStartedPulling="2026-01-21 14:34:55.552418065 +0000 UTC m=+241.526767120" lastFinishedPulling="2026-01-21 14:34:56.988105393 +0000 UTC m=+242.962454438" observedRunningTime="2026-01-21 14:34:57.592097352 +0000 UTC m=+243.566446417" watchObservedRunningTime="2026-01-21 14:34:57.593397804 +0000 UTC m=+243.567746859"
Jan 21 14:34:58 crc kubenswrapper[4834]: I0121 14:34:58.588299 4834 generic.go:334] "Generic (PLEG): container finished" podID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" containerID="e7d73a2252a142e53d2757fb02a352a35db3653955a4100c24c6add27b1d6859" exitCode=0
Jan 21 14:34:58 crc kubenswrapper[4834]: I0121 14:34:58.588417 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g4k" event={"ID":"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7","Type":"ContainerDied","Data":"e7d73a2252a142e53d2757fb02a352a35db3653955a4100c24c6add27b1d6859"}
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.714681 4834 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.715057 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454" gracePeriod=15
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.715090 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a" gracePeriod=15
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.715157 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7" gracePeriod=15
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.715214 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7" gracePeriod=15
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.715433 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2" gracePeriod=15
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716428 4834 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716696 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716716 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716730 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716738 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716749 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716757 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716765 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716772 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716789 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716796 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 14:34:59 crc kubenswrapper[4834]: E0121 14:34:59.716808 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716816 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716948 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716960 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716971 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716984 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.716991 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.718920 4834 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.719542 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.723705 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.758074 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797569 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797596 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797620 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797646 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797808 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.797906 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.798043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899510 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899599 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899623 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899644 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899660 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899656 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899704 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899723 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899681 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899761 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899764 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899769 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899795 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899854 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:34:59 crc kubenswrapper[4834]: I0121 14:34:59.899969 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.057637 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.601446 4834 generic.go:334] "Generic (PLEG): container finished" podID="ff08c980-aca5-4de9-ad83-10c979bc28fb" containerID="08843f096fc44b62dc9f3c007d9a095d0890b97a03e7d35a377fd82247b4e89f" exitCode=0
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.601573 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ff08c980-aca5-4de9-ad83-10c979bc28fb","Type":"ContainerDied","Data":"08843f096fc44b62dc9f3c007d9a095d0890b97a03e7d35a377fd82247b4e89f"}
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.603171 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.603619 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.605947 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.607098 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a" exitCode=0
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.607124 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7" exitCode=0
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.607132 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7" exitCode=0
Jan 21 14:35:00 crc kubenswrapper[4834]: I0121 14:35:00.607143 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2" exitCode=2
Jan 21 14:35:01 crc kubenswrapper[4834]: E0121 14:35:01.066065 4834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.45:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2qf4g.188cc5ad23609322 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2qf4g,UID:25a6ac4b-5bd7-468a-888d-0b7fcc3d290e,APIVersion:v1,ResourceVersion:29622,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 3.484s (3.484s including waiting). Image size: 1166891762 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,LastTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 14:35:01 crc kubenswrapper[4834]: W0121 14:35:01.087128 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-06c2bbbfe8a3d49f3717c2a54af6e522de25cd09fb5a3ad91ce54e136217050c WatchSource:0}: Error finding container 06c2bbbfe8a3d49f3717c2a54af6e522de25cd09fb5a3ad91ce54e136217050c: Status 404 returned error can't find the container with id 06c2bbbfe8a3d49f3717c2a54af6e522de25cd09fb5a3ad91ce54e136217050c
Jan 21 14:35:01 crc kubenswrapper[4834]: I0121 14:35:01.611727 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"06c2bbbfe8a3d49f3717c2a54af6e522de25cd09fb5a3ad91ce54e136217050c"}
Jan 21 14:35:01 crc kubenswrapper[4834]: I0121 14:35:01.868417 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 14:35:01 crc kubenswrapper[4834]: I0121 14:35:01.869466 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:01 crc kubenswrapper[4834]: I0121 14:35:01.870168 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.026398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir\") pod \"ff08c980-aca5-4de9-ad83-10c979bc28fb\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") "
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.026786 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access\") pod \"ff08c980-aca5-4de9-ad83-10c979bc28fb\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") "
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.026822 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock\") pod \"ff08c980-aca5-4de9-ad83-10c979bc28fb\" (UID: \"ff08c980-aca5-4de9-ad83-10c979bc28fb\") "
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.026507 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff08c980-aca5-4de9-ad83-10c979bc28fb" (UID: "ff08c980-aca5-4de9-ad83-10c979bc28fb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.027114 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff08c980-aca5-4de9-ad83-10c979bc28fb" (UID: "ff08c980-aca5-4de9-ad83-10c979bc28fb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.050737 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff08c980-aca5-4de9-ad83-10c979bc28fb" (UID: "ff08c980-aca5-4de9-ad83-10c979bc28fb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.128473 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff08c980-aca5-4de9-ad83-10c979bc28fb-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.128777 4834 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.128788 4834 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff08c980-aca5-4de9-ad83-10c979bc28fb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.619731 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.620941 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454" exitCode=0
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.622423 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.622465 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.623808 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g4k" event={"ID":"81c693e3-bcc4-4d1f-80d4-cf7aed592bc7","Type":"ContainerStarted","Data":"fc445abce4b3a7ecf4b96ae35251edcbf0a3b1a15497c64b54a45e2a4d8f6e40"}
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.626208 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.626960 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.627273 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.627582 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.627580 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ff08c980-aca5-4de9-ad83-10c979bc28fb","Type":"ContainerDied","Data":"e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32"}
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.627687 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e21420faa37a60c79082c1628bead7499a355920ee52ba8cc923491336803b32"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.629701 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8"}
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.630248 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.630473 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.630653 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.630881 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631094 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631186 4834 generic.go:334] "Generic (PLEG): container finished" podID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerID="b5d2234e1282b9ce49d43a134d7b29d570b777c9cb0823b879a5b8374592c12d" exitCode=0
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerDied","Data":"b5d2234e1282b9ce49d43a134d7b29d570b777c9cb0823b879a5b8374592c12d"}
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631314 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631501 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631652 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.631816 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.632018 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.681166 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.682090 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.682376 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.682534 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.682721 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: I0121 14:35:02.683035 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:02 crc kubenswrapper[4834]: E0121 14:35:02.685442 4834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.45:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2qf4g.188cc5ad23609322 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2qf4g,UID:25a6ac4b-5bd7-468a-888d-0b7fcc3d290e,APIVersion:v1,ResourceVersion:29622,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 3.484s (3.484s including waiting). Image size: 1166891762 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,LastTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.237085 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.238798 4834 util.go:48] "No ready sandbox for pod can be found.
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.239447 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.239843 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.240062 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.240242 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.240404 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.240566 4834 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346103 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346154 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346184 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346217 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346455 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346556 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346687 4834 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346705 4834 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.346718 4834 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.640688 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerStarted","Data":"4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76"}
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.641437 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.642371 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.642722 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.643455 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.643699 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.643785 4834 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.644085 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.644873 4834 scope.go:117] "RemoveContainer" containerID="3c8997994245ddcd8e8bdd7ae83e3bc7efbd55ccdd98c1ccced487e3a5ea7c8a"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.645037 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.660574 4834 scope.go:117] "RemoveContainer" containerID="9800e8b029bd9e3996a80e87837823478daed4108b6fc1396cc096b7fd12d7e7"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.667138 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.668123 4834 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.668566 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.668907 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.669256 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.669740 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.676606 4834 scope.go:117] "RemoveContainer" containerID="44c11edddc0cd832e2eb2c7402a48f693c84c0b0005cdaa323d5e246b91619e7"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.694592 4834 scope.go:117] "RemoveContainer" containerID="832e631889686a27228609762f9275aca67f7fcc2d8e6684b3bffa3fc23bcde2"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.696486 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n2fjs"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.697550 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.698027 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.698369 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.698646 4834 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.698918 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.699325 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.708874 4834 scope.go:117] "RemoveContainer" containerID="25fd4169e2a6a7155b767f1ae01aa3494a3fb38c5fb1237dd44c63f300b63454"
Jan 21 14:35:03 crc kubenswrapper[4834]: I0121 14:35:03.725204 4834 scope.go:117] "RemoveContainer" containerID="9977afd5b5d9ff89faf18cf0e20d7da4d95901d70f5622b90246baf35de9f189"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.327440 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.328079 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.328608 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.328832 4834 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.329038 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.329253 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.339429 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.351662 4834 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.352403 4834 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.352962 4834 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.353232 4834 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.353462 4834 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.353492 4834 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.353735 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="200ms"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.443165 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.445085 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.487383 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.488106 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.488523 4834 status_manager.go:851] "Failed to get status for pod" podUID="d3416206-7706-4072-af2f-e5bb2606aef0" pod="openshift-marketplace/redhat-marketplace-8bbsd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8bbsd\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.488914 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.489190 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.489410 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.489665 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.554962 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="400ms"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.699640 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8bbsd"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.700302 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.701003 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.701517 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.701820 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.702180 4834 status_manager.go:851] "Failed to get status for pod" podUID="d3416206-7706-4072-af2f-e5bb2606aef0" pod="openshift-marketplace/redhat-marketplace-8bbsd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8bbsd\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: I0121 14:35:04.702474 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:04 crc kubenswrapper[4834]: E0121 14:35:04.955900 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="800ms"
Jan 21 14:35:05 crc kubenswrapper[4834]: I0121 14:35:05.065914 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:35:05 crc kubenswrapper[4834]: I0121 14:35:05.065998 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7g4k"
Jan 21 14:35:05 crc kubenswrapper[4834]: E0121 14:35:05.757785 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="1.6s"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.129986 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p7g4k" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" containerName="registry-server" probeResult="failure" output=<
Jan 21 14:35:06 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s
Jan 21 14:35:06 crc kubenswrapper[4834]: >
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.820165 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.820220 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.865126 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2qf4g"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.865711 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.865921 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.866235 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused"
Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.866703 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused"
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.866990 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:06 crc kubenswrapper[4834]: I0121 14:35:06.867326 4834 status_manager.go:851] "Failed to get status for pod" podUID="d3416206-7706-4072-af2f-e5bb2606aef0" pod="openshift-marketplace/redhat-marketplace-8bbsd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8bbsd\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:07 crc kubenswrapper[4834]: E0121 14:35:07.359852 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="3.2s" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.007498 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerName="oauth-openshift" containerID="cri-o://892fe8086803960c04c15f07066eb5aac6023ef8ddec9da772f711fddb452b1b" gracePeriod=15 Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.323997 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.325119 4834 status_manager.go:851] "Failed to get status for pod" podUID="d3416206-7706-4072-af2f-e5bb2606aef0" pod="openshift-marketplace/redhat-marketplace-8bbsd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8bbsd\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.325640 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.325878 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.326148 4834 status_manager.go:851] "Failed to get status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.326594 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.327846 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.348433 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.348478 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:10 crc kubenswrapper[4834]: E0121 14:35:10.349090 4834 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.349882 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:10 crc kubenswrapper[4834]: W0121 14:35:10.371988 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-aa86e26e12bb1dbc01e00fe0df966428dd2602ef4247961a1e9e395eac32b3d0 WatchSource:0}: Error finding container aa86e26e12bb1dbc01e00fe0df966428dd2602ef4247961a1e9e395eac32b3d0: Status 404 returned error can't find the container with id aa86e26e12bb1dbc01e00fe0df966428dd2602ef4247961a1e9e395eac32b3d0 Jan 21 14:35:10 crc kubenswrapper[4834]: E0121 14:35:10.561949 4834 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.45:6443: connect: connection refused" interval="6.4s" Jan 21 14:35:10 crc kubenswrapper[4834]: I0121 14:35:10.683915 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa86e26e12bb1dbc01e00fe0df966428dd2602ef4247961a1e9e395eac32b3d0"} Jan 21 14:35:11 crc kubenswrapper[4834]: I0121 14:35:11.690969 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c37487b2ce8c799d093bb8621631f1179b7265fbde1c6440842705ac52f0b94f"} Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.161401 4834 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zjflv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.161468 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: E0121 14:35:12.687174 4834 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.45:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2qf4g.188cc5ad23609322 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2qf4g,UID:25a6ac4b-5bd7-468a-888d-0b7fcc3d290e,APIVersion:v1,ResourceVersion:29622,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 3.484s (3.484s including waiting). 
Image size: 1166891762 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,LastTimestamp:2026-01-21 14:35:01.063308066 +0000 UTC m=+247.037657151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.699894 4834 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c37487b2ce8c799d093bb8621631f1179b7265fbde1c6440842705ac52f0b94f" exitCode=0 Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.699986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c37487b2ce8c799d093bb8621631f1179b7265fbde1c6440842705ac52f0b94f"} Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.700228 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.700252 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:12 crc kubenswrapper[4834]: E0121 14:35:12.700978 4834 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.701164 4834 status_manager.go:851] "Failed to get status for pod" podUID="d3416206-7706-4072-af2f-e5bb2606aef0" pod="openshift-marketplace/redhat-marketplace-8bbsd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8bbsd\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.702654 4834 status_manager.go:851] "Failed to get status for pod" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.702626 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerID="892fe8086803960c04c15f07066eb5aac6023ef8ddec9da772f711fddb452b1b" exitCode=0 Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.702690 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" event={"ID":"2ab89550-989b-47f5-8877-aae1cb61fafd","Type":"ContainerDied","Data":"892fe8086803960c04c15f07066eb5aac6023ef8ddec9da772f711fddb452b1b"} Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.703161 4834 status_manager.go:851] "Failed to get status for pod" podUID="81c693e3-bcc4-4d1f-80d4-cf7aed592bc7" pod="openshift-marketplace/redhat-operators-p7g4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-p7g4k\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.703751 4834 status_manager.go:851] "Failed to get 
status for pod" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" pod="openshift-marketplace/certified-operators-2qf4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2qf4g\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.704436 4834 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:12 crc kubenswrapper[4834]: I0121 14:35:12.705166 4834 status_manager.go:851] "Failed to get status for pod" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" pod="openshift-marketplace/community-operators-n2fjs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n2fjs\": dial tcp 38.102.83.45:6443: connect: connection refused" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.421251 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509251 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509320 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509366 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509389 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509439 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509509 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8fs6\" (UniqueName: \"kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: 
\"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509534 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509557 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509579 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509603 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509632 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509663 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509694 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.509725 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs\") pod \"2ab89550-989b-47f5-8877-aae1cb61fafd\" (UID: \"2ab89550-989b-47f5-8877-aae1cb61fafd\") " Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.511267 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.511542 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.512096 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.512464 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.512918 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.520111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.520754 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.521111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.521142 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.521380 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.521604 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.521687 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.522565 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.524122 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6" (OuterVolumeSpecName: "kube-api-access-x8fs6") pod "2ab89550-989b-47f5-8877-aae1cb61fafd" (UID: "2ab89550-989b-47f5-8877-aae1cb61fafd"). InnerVolumeSpecName "kube-api-access-x8fs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611680 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611767 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611783 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8fs6\" (UniqueName: \"kubernetes.io/projected/2ab89550-989b-47f5-8877-aae1cb61fafd-kube-api-access-x8fs6\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611794 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611809 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611824 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611836 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611847 4834 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611861 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611878 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611889 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611900 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611912 4834 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2ab89550-989b-47f5-8877-aae1cb61fafd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.611939 4834 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2ab89550-989b-47f5-8877-aae1cb61fafd-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.714615 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b9df1b7e5edb1b158e0b4fafb9ff90e7c53e308d952fffd5373def0d0d8ae19f"} Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.714656 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5f5b1f188bd177e0b704dc4be075d815266c30114cd400adc2b99ae377eb397"} Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.716856 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" event={"ID":"2ab89550-989b-47f5-8877-aae1cb61fafd","Type":"ContainerDied","Data":"f4b9e73085b053737010644f528461ff73d75f21ec272afe47855d0621e6d5ac"} Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.716891 4834 scope.go:117] "RemoveContainer" containerID="892fe8086803960c04c15f07066eb5aac6023ef8ddec9da772f711fddb452b1b" Jan 21 14:35:13 crc kubenswrapper[4834]: I0121 14:35:13.717024 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zjflv" Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.724992 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d047122d487485ce655c05d67d78c9e35d2edd7bfc26c21bd736f6359961b399"} Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.725034 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"53009f6c6d44f15c5a6e00f9b0355036cba9074673164e9c2c21d245b7511889"} Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.725045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"219d18e0a32ae1f4dc1e3f00171996e00ce07a7f222668a2f128fa2cab9b992e"} Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.725287 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.725302 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.725486 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.728315 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.728489 4834 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871" exitCode=1 Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.728575 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871"} Jan 21 14:35:14 crc kubenswrapper[4834]: I0121 14:35:14.729254 4834 scope.go:117] "RemoveContainer" containerID="2994cb19057b0aab8eb2f561c4c91fc3b7e95de7f4b5c6383a521eaabbb50871" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.109103 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7g4k" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.145014 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7g4k" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.350909 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.350986 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.356646 4834 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]log ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]etcd ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-filter ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-apiextensions-informers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-apiextensions-controllers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/crd-informer-synced ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-system-namespaces-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 21 14:35:15 crc kubenswrapper[4834]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/bootstrap-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/start-kube-aggregator-informers ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-registration-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-discovery-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]autoregister-completion ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-openapi-controller ok Jan 21 14:35:15 crc kubenswrapper[4834]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 21 
14:35:15 crc kubenswrapper[4834]: livez check failed Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.356709 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.738696 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 14:35:15 crc kubenswrapper[4834]: I0121 14:35:15.739537 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"228c8de9db598a920c138ceab031be52e4f5453bae12b1c21a4729a48437e158"} Jan 21 14:35:16 crc kubenswrapper[4834]: I0121 14:35:16.883174 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2qf4g" Jan 21 14:35:19 crc kubenswrapper[4834]: I0121 14:35:19.733719 4834 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.300907 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" podUID="98c36cc5-0276-4002-943b-030fb686cae6" containerName="registry" containerID="cri-o://cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df" gracePeriod=30 Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.355819 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.358210 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="48841e4a-ecf7-44c2-922c-abb137c600be" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.676883 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.766942 4834 generic.go:334] "Generic (PLEG): container finished" podID="98c36cc5-0276-4002-943b-030fb686cae6" containerID="cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df" exitCode=0 Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767250 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767262 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767458 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767722 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" event={"ID":"98c36cc5-0276-4002-943b-030fb686cae6","Type":"ContainerDied","Data":"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df"} Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767745 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qg6w9" event={"ID":"98c36cc5-0276-4002-943b-030fb686cae6","Type":"ContainerDied","Data":"a4ac69525bf50bd16f62e53079f503f9bde2f6c96727c4496890010c1ca77f14"} Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.767763 4834 scope.go:117] "RemoveContainer" containerID="cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.770663 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="48841e4a-ecf7-44c2-922c-abb137c600be" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.774266 4834 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e5f5b1f188bd177e0b704dc4be075d815266c30114cd400adc2b99ae377eb397" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.774648 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.787225 4834 scope.go:117] "RemoveContainer" containerID="cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df" Jan 21 14:35:20 crc kubenswrapper[4834]: E0121 14:35:20.787746 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df\": container with ID starting with cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df not found: ID does not exist" containerID="cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.787790 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df"} err="failed to get container status \"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df\": rpc error: code = NotFound desc = could not find container \"cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df\": container with ID starting with cce44683853c47d247958332328fcdb4cabc8a28f3803bb07e10a71b6b8ad3df not found: ID does not exist" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807549 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807598 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807620 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807647 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807676 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807716 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807746 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmrml\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.807947 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"98c36cc5-0276-4002-943b-030fb686cae6\" (UID: \"98c36cc5-0276-4002-943b-030fb686cae6\") " Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.816273 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.816865 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.818465 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml" (OuterVolumeSpecName: "kube-api-access-nmrml") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "kube-api-access-nmrml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.819670 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.819739 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.823093 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.824997 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.831324 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "98c36cc5-0276-4002-943b-030fb686cae6" (UID: "98c36cc5-0276-4002-943b-030fb686cae6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909537 4834 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/98c36cc5-0276-4002-943b-030fb686cae6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909579 4834 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/98c36cc5-0276-4002-943b-030fb686cae6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909589 4834 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909598 4834 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909606 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909614 4834 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/98c36cc5-0276-4002-943b-030fb686cae6-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:20 crc kubenswrapper[4834]: I0121 14:35:20.909623 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmrml\" (UniqueName: \"kubernetes.io/projected/98c36cc5-0276-4002-943b-030fb686cae6-kube-api-access-nmrml\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:21 crc kubenswrapper[4834]: I0121 14:35:21.204858 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:35:21 crc kubenswrapper[4834]: I0121 14:35:21.774533 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:21 crc kubenswrapper[4834]: I0121 14:35:21.774562 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:21 crc kubenswrapper[4834]: I0121 14:35:21.777560 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="48841e4a-ecf7-44c2-922c-abb137c600be" Jan 21 14:35:22 crc kubenswrapper[4834]: I0121 14:35:22.780008 4834 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:22 crc kubenswrapper[4834]: I0121 14:35:22.780408 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="449516f3-6736-49f1-b41e-cc3702440174" Jan 21 14:35:22 crc kubenswrapper[4834]: I0121 14:35:22.784003 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="48841e4a-ecf7-44c2-922c-abb137c600be" Jan 21 14:35:24 crc kubenswrapper[4834]: I0121 14:35:24.133245 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:35:24 crc kubenswrapper[4834]: I0121 14:35:24.138043 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:35:24 crc kubenswrapper[4834]: I0121 14:35:24.807285 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 14:35:29 crc kubenswrapper[4834]: I0121 14:35:29.683969 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 14:35:29 crc kubenswrapper[4834]: I0121 14:35:29.945765 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 14:35:30 crc kubenswrapper[4834]: I0121 14:35:30.332210 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 14:35:30 crc kubenswrapper[4834]: I0121 14:35:30.487293 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 14:35:30 crc kubenswrapper[4834]: I0121 14:35:30.688727 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 14:35:30 crc kubenswrapper[4834]: I0121 14:35:30.753672 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.522737 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.710077 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.755637 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.763958 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.843443 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 14:35:31 crc kubenswrapper[4834]: I0121 14:35:31.995337 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.029482 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.215516 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.244739 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.307602 4834 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.790555 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.794882 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 14:35:32 crc kubenswrapper[4834]: I0121 14:35:32.811423 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.033552 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.042211 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.125747 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.302004 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.317380 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.319093 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.455275 4834 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.456384 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.649302 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.687685 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 14:35:33 crc kubenswrapper[4834]: I0121 14:35:33.713754 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.094213 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.152430 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.257969 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.342110 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.436344 4834 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 
14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.726373 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.760196 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 14:35:34 crc kubenswrapper[4834]: I0121 14:35:34.915531 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.176202 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.196965 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.366956 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.381246 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.393267 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.428095 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.557585 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.568142 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.571707 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.637658 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.847662 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 14:35:35 crc kubenswrapper[4834]: I0121 14:35:35.975709 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.008127 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.063331 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.214801 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.241197 4834 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.242488 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.303524 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.376970 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.449147 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.612239 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.831810 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.849171 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.872105 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.877880 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.954546 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 14:35:36 crc kubenswrapper[4834]: I0121 14:35:36.977249 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.046076 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.063297 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.092448 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.157039 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.164375 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.185502 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.196417 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.225078 4834 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.244544 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.312298 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.352216 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.374064 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.391142 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.428349 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.444061 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.480655 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.522301 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.543850 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.547865 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.742691 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.851633 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.857260 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.879825 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.897453 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.904522 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 14:35:37 crc kubenswrapper[4834]: I0121 14:35:37.934199 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.197916 4834 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.303853 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.306376 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.390412 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.417959 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.424526 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.486264 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.502710 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.524908 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.525907 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.529989 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.531739 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.574194 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.591667 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 14:35:38 crc kubenswrapper[4834]: I0121 14:35:38.915306 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.038854 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.139229 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.181978 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.194331 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.227317 4834 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.314068 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.329873 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.334684 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.441842 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.539299 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.571158 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.678642 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.715334 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.719963 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.822398 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.876171 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.879655 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.895170 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.899031 4834 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.969898 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 14:35:39 crc kubenswrapper[4834]: I0121 14:35:39.976521 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.046126 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.084448 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.213988 4834 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.224426 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.231514 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.282106 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.285456 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.319223 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.442642 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.551918 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.568675 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.666994 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.746745 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.781281 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 14:35:40 crc kubenswrapper[4834]: I0121 14:35:40.973975 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.121689 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.156971 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.167019 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.182142 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.292085 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.294076 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.319687 4834 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.332846 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.357122 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.388918 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.570076 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.580670 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.666651 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.686027 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.694218 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.729175 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.759538 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.776882 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.839607 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 14:35:41 crc kubenswrapper[4834]: I0121 14:35:41.841315 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.118531 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.172270 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.194978 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.276509 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.321699 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" 
Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.350185 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.399122 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.437446 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.574190 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.582018 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.613530 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.668142 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.883709 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.884893 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.928950 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.977398 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:35:42 crc kubenswrapper[4834]: I0121 14:35:42.978773 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.007912 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.012919 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.054251 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.071050 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.078607 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.102204 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.113906 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.168477 
4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.189600 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.239408 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.258555 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.330218 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.335288 4834 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.389210 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.397031 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.417810 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.522149 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.533841 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.598752 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.721590 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:35:43 crc kubenswrapper[4834]: I0121 14:35:43.967961 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.009764 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.033890 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.132314 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.226358 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.233183 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.272448 
4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.345484 4834 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.346678 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7g4k" podStartSLOduration=45.838932919 podStartE2EDuration="50.34666291s" podCreationTimestamp="2026-01-21 14:34:54 +0000 UTC" firstStartedPulling="2026-01-21 14:34:56.561440042 +0000 UTC m=+242.535789077" lastFinishedPulling="2026-01-21 14:35:01.069170023 +0000 UTC m=+247.043519068" observedRunningTime="2026-01-21 14:35:19.615491841 +0000 UTC m=+265.589840906" watchObservedRunningTime="2026-01-21 14:35:44.34666291 +0000 UTC m=+290.321011945" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.348111 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2qf4g" podStartSLOduration=42.629886826 podStartE2EDuration="48.348104665s" podCreationTimestamp="2026-01-21 14:34:56 +0000 UTC" firstStartedPulling="2026-01-21 14:34:57.578594619 +0000 UTC m=+243.552943664" lastFinishedPulling="2026-01-21 14:35:03.296812448 +0000 UTC m=+249.271161503" observedRunningTime="2026-01-21 14:35:19.629968734 +0000 UTC m=+265.604317779" watchObservedRunningTime="2026-01-21 14:35:44.348104665 +0000 UTC m=+290.322453710" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.348809 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.348803948 podStartE2EDuration="45.348803948s" podCreationTimestamp="2026-01-21 14:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:19.64142842 +0000 UTC m=+265.615777485" watchObservedRunningTime="2026-01-21 14:35:44.348803948 +0000 UTC m=+290.323152993" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.349901 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-image-registry/image-registry-697d97f7c8-qg6w9","openshift-authentication/oauth-openshift-558db77b4-zjflv"] Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.349965 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.354602 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.399096 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.487166 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.488473 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.718075 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" 
Jan 21 14:35:44 crc kubenswrapper[4834]: I0121 14:35:44.885557 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.058942 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.113480 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.245188 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.296215 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.300814 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.347262 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.405154 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.447111 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.481700 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.578859 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.690647 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.690622275 podStartE2EDuration="26.690622275s" podCreationTimestamp="2026-01-21 14:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:44.376508344 +0000 UTC m=+290.350857419" watchObservedRunningTime="2026-01-21 14:35:45.690622275 +0000 UTC m=+291.664971320" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.691706 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86d854dc6b-4cptf"] Jan 21 14:35:45 crc kubenswrapper[4834]: E0121 14:35:45.692056 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" containerName="installer" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692077 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" containerName="installer" Jan 21 14:35:45 crc kubenswrapper[4834]: E0121 14:35:45.692092 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c36cc5-0276-4002-943b-030fb686cae6" containerName="registry" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692100 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c36cc5-0276-4002-943b-030fb686cae6" 
containerName="registry" Jan 21 14:35:45 crc kubenswrapper[4834]: E0121 14:35:45.692117 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerName="oauth-openshift" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692125 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerName="oauth-openshift" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692268 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" containerName="oauth-openshift" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692294 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff08c980-aca5-4de9-ad83-10c979bc28fb" containerName="installer" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692312 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c36cc5-0276-4002-943b-030fb686cae6" containerName="registry" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.692997 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.697813 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.698050 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.698335 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.699124 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.699362 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.699416 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.700318 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.700429 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.700473 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.700528 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.700757 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.703117 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.708903 4834 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d854dc6b-4cptf"] Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.712819 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.715075 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.715752 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.726265 4834 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813100 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-session\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813355 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813417 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813439 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813578 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813643 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813833 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-error\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.813911 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr972\" (UniqueName: \"kubernetes.io/projected/daa6a320-2476-470a-b51b-54adad0654f6-kube-api-access-fr972\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.814083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.814137 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.814174 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daa6a320-2476-470a-b51b-54adad0654f6-audit-dir\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.814198 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-login\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.814273 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-audit-policies\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.881219 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.904972 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915557 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915597 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915639 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-error\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915661 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915680 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr972\" (UniqueName: \"kubernetes.io/projected/daa6a320-2476-470a-b51b-54adad0654f6-kube-api-access-fr972\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915706 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915722 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915740 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daa6a320-2476-470a-b51b-54adad0654f6-audit-dir\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.915755 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-login\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916090 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/daa6a320-2476-470a-b51b-54adad0654f6-audit-dir\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916615 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916692 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-audit-policies\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916727 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-session\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916814 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916842 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.916881 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.917250 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-audit-policies\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.917416 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.917761 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.922096 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.922245 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-session\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.922257 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.923178 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.923413 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-login\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.924862 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.925542 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.925768 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/daa6a320-2476-470a-b51b-54adad0654f6-v4-0-config-user-template-error\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.933309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr972\" (UniqueName: \"kubernetes.io/projected/daa6a320-2476-470a-b51b-54adad0654f6-kube-api-access-fr972\") pod \"oauth-openshift-86d854dc6b-4cptf\" (UID: \"daa6a320-2476-470a-b51b-54adad0654f6\") " pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:45 crc kubenswrapper[4834]: I0121 14:35:45.935453 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.032820 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.332608 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab89550-989b-47f5-8877-aae1cb61fafd" path="/var/lib/kubelet/pods/2ab89550-989b-47f5-8877-aae1cb61fafd/volumes" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.333644 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98c36cc5-0276-4002-943b-030fb686cae6" path="/var/lib/kubelet/pods/98c36cc5-0276-4002-943b-030fb686cae6/volumes" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.432485 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d854dc6b-4cptf"] Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.744194 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.934512 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" event={"ID":"daa6a320-2476-470a-b51b-54adad0654f6","Type":"ContainerStarted","Data":"c1cc504760ccbe6624288ef161c22bafc987bc4c570d9578173b4d576bc0d97a"} Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.934558 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" event={"ID":"daa6a320-2476-470a-b51b-54adad0654f6","Type":"ContainerStarted","Data":"649ff784aed58e13a6490bb30ffac8b0a68f45187a44b6feff992bb703541448"} Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.934797 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:46 crc kubenswrapper[4834]: I0121 14:35:46.956205 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" podStartSLOduration=62.956189081 podStartE2EDuration="1m2.956189081s" podCreationTimestamp="2026-01-21 14:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:35:46.952827093 +0000 UTC m=+292.927176148" watchObservedRunningTime="2026-01-21 14:35:46.956189081 +0000 UTC m=+292.930538126" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.040836 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-86d854dc6b-4cptf" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.129659 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.295858 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.323120 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.563573 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.752378 4834 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 14:35:47 crc kubenswrapper[4834]: I0121 14:35:47.940580 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 14:35:48 crc kubenswrapper[4834]: I0121 14:35:48.238094 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:35:48 crc kubenswrapper[4834]: I0121 14:35:48.261523 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 14:35:48 crc kubenswrapper[4834]: I0121 14:35:48.468220 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 14:35:48 crc kubenswrapper[4834]: I0121 14:35:48.760149 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 14:35:53 crc kubenswrapper[4834]: I0121 14:35:53.606258 4834 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:35:53 crc kubenswrapper[4834]: I0121 14:35:53.606872 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8" gracePeriod=5 Jan 21 14:35:54 crc kubenswrapper[4834]: I0121 14:35:54.185067 4834 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.732150 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.732771 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.891880 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.891977 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892020 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892037 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892066 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892101 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892130 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892166 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892589 4834 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892611 4834 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892622 4834 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.892635 4834 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.903392 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.993576 4834 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.997785 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.997841 4834 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8" exitCode=137 Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.997883 4834 scope.go:117] "RemoveContainer" containerID="4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8" Jan 21 14:35:58 crc kubenswrapper[4834]: I0121 14:35:58.997983 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 14:35:59 crc kubenswrapper[4834]: I0121 14:35:59.013760 4834 scope.go:117] "RemoveContainer" containerID="4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8" Jan 21 14:35:59 crc kubenswrapper[4834]: E0121 14:35:59.014360 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8\": container with ID starting with 4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8 not found: ID does not exist" containerID="4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8" Jan 21 14:35:59 crc kubenswrapper[4834]: I0121 14:35:59.014392 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8"} err="failed to get container status \"4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8\": rpc error: code = NotFound desc = could not find container \"4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8\": container with ID starting with 4f81c101d71dcfdba40d0ecdb52df321df7b1719403e30d89b19edf146f775a8 not found: ID does not exist" Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.332199 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.332494 4834 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.345829 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.345890 4834 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e291465f-d4d4-459f-823f-ff287b032b67" Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.349353 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 14:36:00 crc kubenswrapper[4834]: I0121 14:36:00.349393 4834 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e291465f-d4d4-459f-823f-ff287b032b67" Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.452127 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"] Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.452975 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerName="controller-manager" containerID="cri-o://aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7" gracePeriod=30 Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.566666 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.566900 4834 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" podUID="5f689c0c-55d1-4533-8447-b934821c0b0b" containerName="route-controller-manager" containerID="cri-o://c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40" gracePeriod=30 Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.905977 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:36:10 crc kubenswrapper[4834]: I0121 14:36:10.939513 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045359 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg84h\" (UniqueName: \"kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h\") pod \"5f689c0c-55d1-4533-8447-b934821c0b0b\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045427 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config\") pod \"5f689c0c-55d1-4533-8447-b934821c0b0b\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045458 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config\") pod \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045496 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert\") pod \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045534 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles\") pod \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045552 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca\") pod \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045593 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert\") pod \"5f689c0c-55d1-4533-8447-b934821c0b0b\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.045653 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca\") pod \"5f689c0c-55d1-4533-8447-b934821c0b0b\" (UID: \"5f689c0c-55d1-4533-8447-b934821c0b0b\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 
14:36:11.045697 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46g47\" (UniqueName: \"kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47\") pod \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\" (UID: \"0094f50f-57ac-4cb5-a536-81bf5fc7ae90\") " Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.046169 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config" (OuterVolumeSpecName: "config") pod "5f689c0c-55d1-4533-8447-b934821c0b0b" (UID: "5f689c0c-55d1-4533-8447-b934821c0b0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.046774 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5f689c0c-55d1-4533-8447-b934821c0b0b" (UID: "5f689c0c-55d1-4533-8447-b934821c0b0b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.047005 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca" (OuterVolumeSpecName: "client-ca") pod "0094f50f-57ac-4cb5-a536-81bf5fc7ae90" (UID: "0094f50f-57ac-4cb5-a536-81bf5fc7ae90"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.047058 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0094f50f-57ac-4cb5-a536-81bf5fc7ae90" (UID: "0094f50f-57ac-4cb5-a536-81bf5fc7ae90"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.047037 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config" (OuterVolumeSpecName: "config") pod "0094f50f-57ac-4cb5-a536-81bf5fc7ae90" (UID: "0094f50f-57ac-4cb5-a536-81bf5fc7ae90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.051553 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h" (OuterVolumeSpecName: "kube-api-access-sg84h") pod "5f689c0c-55d1-4533-8447-b934821c0b0b" (UID: "5f689c0c-55d1-4533-8447-b934821c0b0b"). InnerVolumeSpecName "kube-api-access-sg84h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.051583 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5f689c0c-55d1-4533-8447-b934821c0b0b" (UID: "5f689c0c-55d1-4533-8447-b934821c0b0b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.051633 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47" (OuterVolumeSpecName: "kube-api-access-46g47") pod "0094f50f-57ac-4cb5-a536-81bf5fc7ae90" (UID: "0094f50f-57ac-4cb5-a536-81bf5fc7ae90"). InnerVolumeSpecName "kube-api-access-46g47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.051696 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0094f50f-57ac-4cb5-a536-81bf5fc7ae90" (UID: "0094f50f-57ac-4cb5-a536-81bf5fc7ae90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.070686 4834 generic.go:334] "Generic (PLEG): container finished" podID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerID="aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7" exitCode=0 Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.070733 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.070761 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" event={"ID":"0094f50f-57ac-4cb5-a536-81bf5fc7ae90","Type":"ContainerDied","Data":"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7"} Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.070867 4834 scope.go:117] "RemoveContainer" containerID="aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.070792 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-f4xlb" event={"ID":"0094f50f-57ac-4cb5-a536-81bf5fc7ae90","Type":"ContainerDied","Data":"bd512f43b04ad5332a64267215693ca10e7ae0c03d2c127d2b4220909867e2cd"} Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.078712 4834 generic.go:334] "Generic (PLEG): container finished" podID="5f689c0c-55d1-4533-8447-b934821c0b0b" containerID="c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40" exitCode=0 Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.078762 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" event={"ID":"5f689c0c-55d1-4533-8447-b934821c0b0b","Type":"ContainerDied","Data":"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40"} Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.078787 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" event={"ID":"5f689c0c-55d1-4533-8447-b934821c0b0b","Type":"ContainerDied","Data":"540f64f3858ec5003e9c692f01144b222c02378c68b63a1e7a684e0d4096bee5"} Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.078838 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.096543 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"] Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.100204 4834 scope.go:117] "RemoveContainer" containerID="aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7" Jan 21 14:36:11 crc kubenswrapper[4834]: E0121 14:36:11.100538 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7\": container with ID starting with aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7 not found: ID does not exist" containerID="aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.100576 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7"} err="failed to get container status \"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7\": rpc error: code = NotFound desc = could not find container \"aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7\": container with ID starting with aade7706afb3c76c08228690e952e5d753a7e78d712764453cf720afa19845e7 not found: ID does not exist" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.100604 4834 scope.go:117] "RemoveContainer" containerID="c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.105121 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-f4xlb"] Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.112440 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.116172 4834 scope.go:117] "RemoveContainer" containerID="c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.116692 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qv272"] Jan 21 14:36:11 crc kubenswrapper[4834]: E0121 14:36:11.116688 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40\": container with ID starting with c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40 not found: ID does not exist" containerID="c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.116779 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40"} err="failed to get container status \"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40\": rpc error: code = NotFound desc = could not find container \"c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40\": container with ID starting with c3b7f305802a218278bd6ed8fa5fa660d9677e8378571108f7754221a605ab40 not found: ID does not exist" Jan 21 
14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147600 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147651 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147664 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147673 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f689c0c-55d1-4533-8447-b934821c0b0b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147683 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147692 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46g47\" (UniqueName: \"kubernetes.io/projected/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-kube-api-access-46g47\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147702 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg84h\" (UniqueName: \"kubernetes.io/projected/5f689c0c-55d1-4533-8447-b934821c0b0b-kube-api-access-sg84h\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147711 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f689c0c-55d1-4533-8447-b934821c0b0b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:11 crc kubenswrapper[4834]: I0121 14:36:11.147719 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0094f50f-57ac-4cb5-a536-81bf5fc7ae90-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.333156 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" path="/var/lib/kubelet/pods/0094f50f-57ac-4cb5-a536-81bf5fc7ae90/volumes" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.334515 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f689c0c-55d1-4533-8447-b934821c0b0b" path="/var/lib/kubelet/pods/5f689c0c-55d1-4533-8447-b934821c0b0b/volumes" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697374 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:36:12 crc kubenswrapper[4834]: E0121 14:36:12.697612 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerName="controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697625 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerName="controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: E0121 14:36:12.697640 4834 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697646 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:36:12 crc kubenswrapper[4834]: E0121 14:36:12.697657 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f689c0c-55d1-4533-8447-b934821c0b0b" containerName="route-controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697663 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f689c0c-55d1-4533-8447-b934821c0b0b" containerName="route-controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697783 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0094f50f-57ac-4cb5-a536-81bf5fc7ae90" containerName="controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697836 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f689c0c-55d1-4533-8447-b934821c0b0b" containerName="route-controller-manager" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.697851 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.698292 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.701284 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.702028 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.702667 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.702860 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.703033 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.702880 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.703292 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.705660 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.706016 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.706040 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.706348 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.706634 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.706871 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.712720 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.719663 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.731534 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.733156 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868671 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868756 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868802 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868826 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868851 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868868 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868892 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868912 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxp4\" (UniqueName: \"kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.868943 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tnm9\" (UniqueName: \"kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.970526 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.970845 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.970962 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxp4\" (UniqueName: \"kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971125 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tnm9\" (UniqueName: \"kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971238 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971321 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971405 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.971471 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: 
\"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.972763 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.973648 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.975130 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.975513 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.975648 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.982763 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.983769 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:12 crc kubenswrapper[4834]: I0121 14:36:12.992899 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxp4\" (UniqueName: \"kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4\") pod \"route-controller-manager-656f6c855f-wpt58\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:13 crc kubenswrapper[4834]: I0121 14:36:13.001121 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tnm9\" (UniqueName: \"kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9\") pod \"controller-manager-59c9d6bf67-7lr7g\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:13 crc kubenswrapper[4834]: I0121 14:36:13.031070 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:13 crc kubenswrapper[4834]: I0121 14:36:13.036693 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:13 crc kubenswrapper[4834]: I0121 14:36:13.243750 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:36:13 crc kubenswrapper[4834]: W0121 14:36:13.249918 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e1965e7_dfbc_45f7_8666_5b02b29ff934.slice/crio-752afd2d97832fe921b3baf87f3322c53a394807eb1b74adc8243c8d76f0d760 WatchSource:0}: Error finding container 752afd2d97832fe921b3baf87f3322c53a394807eb1b74adc8243c8d76f0d760: Status 404 returned error can't find the container with id 752afd2d97832fe921b3baf87f3322c53a394807eb1b74adc8243c8d76f0d760 Jan 21 14:36:13 crc kubenswrapper[4834]: I0121 14:36:13.296654 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:36:13 crc kubenswrapper[4834]: W0121 14:36:13.302000 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ee68d2b_e77e_499e_a6c4_6a50db160908.slice/crio-7288bd27cdc5b7a5ac740d6719d1633ee780f2aceb373c0707fbbcbcae91d6bf WatchSource:0}: Error finding container 7288bd27cdc5b7a5ac740d6719d1633ee780f2aceb373c0707fbbcbcae91d6bf: Status 404 returned error can't find the container with id 7288bd27cdc5b7a5ac740d6719d1633ee780f2aceb373c0707fbbcbcae91d6bf Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.179556 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" event={"ID":"4e1965e7-dfbc-45f7-8666-5b02b29ff934","Type":"ContainerStarted","Data":"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b"} Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.179952 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" event={"ID":"4e1965e7-dfbc-45f7-8666-5b02b29ff934","Type":"ContainerStarted","Data":"752afd2d97832fe921b3baf87f3322c53a394807eb1b74adc8243c8d76f0d760"} Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.181279 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.183663 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" event={"ID":"5ee68d2b-e77e-499e-a6c4-6a50db160908","Type":"ContainerStarted","Data":"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83"} Jan 21 14:36:14 crc 
kubenswrapper[4834]: I0121 14:36:14.183697 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" event={"ID":"5ee68d2b-e77e-499e-a6c4-6a50db160908","Type":"ContainerStarted","Data":"7288bd27cdc5b7a5ac740d6719d1633ee780f2aceb373c0707fbbcbcae91d6bf"} Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.184575 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.189010 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.189351 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.221118 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" podStartSLOduration=4.221099155 podStartE2EDuration="4.221099155s" podCreationTimestamp="2026-01-21 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:14.19904928 +0000 UTC m=+320.173398325" watchObservedRunningTime="2026-01-21 14:36:14.221099155 +0000 UTC m=+320.195448200" Jan 21 14:36:14 crc kubenswrapper[4834]: I0121 14:36:14.241908 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" podStartSLOduration=4.241888829 podStartE2EDuration="4.241888829s" podCreationTimestamp="2026-01-21 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:36:14.239666809 +0000 UTC m=+320.214015864" watchObservedRunningTime="2026-01-21 14:36:14.241888829 +0000 UTC m=+320.216237874" Jan 21 14:36:47 crc kubenswrapper[4834]: I0121 14:36:47.113761 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:36:47 crc kubenswrapper[4834]: I0121 14:36:47.114595 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.448409 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.449749 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" podUID="5ee68d2b-e77e-499e-a6c4-6a50db160908" containerName="controller-manager" containerID="cri-o://9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83" gracePeriod=30 Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.451898 4834 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.452161 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" podUID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" containerName="route-controller-manager" containerID="cri-o://00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b" gracePeriod=30 Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.950417 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:37:10 crc kubenswrapper[4834]: I0121 14:37:10.956169 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026226 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tnm9\" (UniqueName: \"kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9\") pod \"5ee68d2b-e77e-499e-a6c4-6a50db160908\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026286 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert\") pod \"5ee68d2b-e77e-499e-a6c4-6a50db160908\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026320 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert\") pod \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026360 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles\") pod \"5ee68d2b-e77e-499e-a6c4-6a50db160908\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026412 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca\") pod \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026455 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca\") pod \"5ee68d2b-e77e-499e-a6c4-6a50db160908\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026497 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config\") pod \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026528 4834 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6wxp4\" (UniqueName: \"kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4\") pod \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\" (UID: \"4e1965e7-dfbc-45f7-8666-5b02b29ff934\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.026573 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config\") pod \"5ee68d2b-e77e-499e-a6c4-6a50db160908\" (UID: \"5ee68d2b-e77e-499e-a6c4-6a50db160908\") " Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.027266 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ee68d2b-e77e-499e-a6c4-6a50db160908" (UID: "5ee68d2b-e77e-499e-a6c4-6a50db160908"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.027317 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config" (OuterVolumeSpecName: "config") pod "5ee68d2b-e77e-499e-a6c4-6a50db160908" (UID: "5ee68d2b-e77e-499e-a6c4-6a50db160908"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.027342 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config" (OuterVolumeSpecName: "config") pod "4e1965e7-dfbc-45f7-8666-5b02b29ff934" (UID: "4e1965e7-dfbc-45f7-8666-5b02b29ff934"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.028012 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ee68d2b-e77e-499e-a6c4-6a50db160908" (UID: "5ee68d2b-e77e-499e-a6c4-6a50db160908"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.028540 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e1965e7-dfbc-45f7-8666-5b02b29ff934" (UID: "4e1965e7-dfbc-45f7-8666-5b02b29ff934"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.032143 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4" (OuterVolumeSpecName: "kube-api-access-6wxp4") pod "4e1965e7-dfbc-45f7-8666-5b02b29ff934" (UID: "4e1965e7-dfbc-45f7-8666-5b02b29ff934"). InnerVolumeSpecName "kube-api-access-6wxp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.032186 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ee68d2b-e77e-499e-a6c4-6a50db160908" (UID: "5ee68d2b-e77e-499e-a6c4-6a50db160908"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.032352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9" (OuterVolumeSpecName: "kube-api-access-5tnm9") pod "5ee68d2b-e77e-499e-a6c4-6a50db160908" (UID: "5ee68d2b-e77e-499e-a6c4-6a50db160908"). InnerVolumeSpecName "kube-api-access-5tnm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.039071 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e1965e7-dfbc-45f7-8666-5b02b29ff934" (UID: "4e1965e7-dfbc-45f7-8666-5b02b29ff934"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127513 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127572 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tnm9\" (UniqueName: \"kubernetes.io/projected/5ee68d2b-e77e-499e-a6c4-6a50db160908-kube-api-access-5tnm9\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127584 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee68d2b-e77e-499e-a6c4-6a50db160908-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127593 4834 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1965e7-dfbc-45f7-8666-5b02b29ff934-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127605 4834 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127613 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127622 4834 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ee68d2b-e77e-499e-a6c4-6a50db160908-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127630 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1965e7-dfbc-45f7-8666-5b02b29ff934-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.127638 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wxp4\" (UniqueName: \"kubernetes.io/projected/4e1965e7-dfbc-45f7-8666-5b02b29ff934-kube-api-access-6wxp4\") on node \"crc\" DevicePath \"\"" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.515574 4834 generic.go:334] "Generic (PLEG): container finished" podID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" 
containerID="00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b" exitCode=0 Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.515651 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" event={"ID":"4e1965e7-dfbc-45f7-8666-5b02b29ff934","Type":"ContainerDied","Data":"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b"} Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.515689 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.515721 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58" event={"ID":"4e1965e7-dfbc-45f7-8666-5b02b29ff934","Type":"ContainerDied","Data":"752afd2d97832fe921b3baf87f3322c53a394807eb1b74adc8243c8d76f0d760"} Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.515747 4834 scope.go:117] "RemoveContainer" containerID="00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.518620 4834 generic.go:334] "Generic (PLEG): container finished" podID="5ee68d2b-e77e-499e-a6c4-6a50db160908" containerID="9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83" exitCode=0 Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.518666 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" event={"ID":"5ee68d2b-e77e-499e-a6c4-6a50db160908","Type":"ContainerDied","Data":"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83"} Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.518681 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.518696 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g" event={"ID":"5ee68d2b-e77e-499e-a6c4-6a50db160908","Type":"ContainerDied","Data":"7288bd27cdc5b7a5ac740d6719d1633ee780f2aceb373c0707fbbcbcae91d6bf"} Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.540470 4834 scope.go:117] "RemoveContainer" containerID="00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b" Jan 21 14:37:11 crc kubenswrapper[4834]: E0121 14:37:11.541093 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b\": container with ID starting with 00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b not found: ID does not exist" containerID="00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.541162 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b"} err="failed to get container status \"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b\": rpc error: code = NotFound desc = could not find container \"00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b\": container with ID starting with 00c5a748f22f3f1684aff16a19eb92e408e4d3bba0f39e4c2bf4a92a4ed6ed7b not found: ID does not exist" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.541202 4834 scope.go:117] "RemoveContainer" containerID="9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.557847 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.560071 4834 scope.go:117] "RemoveContainer" containerID="9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83" Jan 21 14:37:11 crc kubenswrapper[4834]: E0121 14:37:11.560649 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83\": container with ID starting with 9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83 not found: ID does not exist" containerID="9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.560721 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83"} err="failed to get container status \"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83\": rpc error: code = NotFound desc = could not find container \"9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83\": container with ID starting with 9fd95052944f009e77235c89a3a690f82d54c7e204589493659ebabdce5bfe83 not found: ID does not exist" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.568603 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59c9d6bf67-7lr7g"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.573142 4834 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.575909 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656f6c855f-wpt58"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.737760 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67d59ffb45-x7gzh"] Jan 21 14:37:11 crc kubenswrapper[4834]: E0121 14:37:11.738242 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee68d2b-e77e-499e-a6c4-6a50db160908" containerName="controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.738264 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee68d2b-e77e-499e-a6c4-6a50db160908" containerName="controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: E0121 14:37:11.738298 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" containerName="route-controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.738307 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" containerName="route-controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.738494 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee68d2b-e77e-499e-a6c4-6a50db160908" containerName="controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.738540 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" containerName="route-controller-manager" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.739251 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.742880 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.742940 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.743730 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.743047 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.743597 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.744178 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.748557 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.748826 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749350 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749516 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749618 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749717 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749829 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.749893 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.756641 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.759501 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67d59ffb45-x7gzh"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.766715 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl"] Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.836522 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2hx\" (UniqueName: \"kubernetes.io/projected/1af9b8d6-f44c-4664-8703-6556205c0611-kube-api-access-rk2hx\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.836991 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af9b8d6-f44c-4664-8703-6556205c0611-serving-cert\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837029 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b3d4e3-56ad-4b7e-af66-861001077d45-serving-cert\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc 
kubenswrapper[4834]: I0121 14:37:11.837048 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbdg\" (UniqueName: \"kubernetes.io/projected/d9b3d4e3-56ad-4b7e-af66-861001077d45-kube-api-access-ggbdg\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837072 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-client-ca\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837094 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-config\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837147 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-proxy-ca-bundles\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837189 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-config\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.837210 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-client-ca\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.938879 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af9b8d6-f44c-4664-8703-6556205c0611-serving-cert\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.939257 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b3d4e3-56ad-4b7e-af66-861001077d45-serving-cert\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 
14:37:11.939383 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbdg\" (UniqueName: \"kubernetes.io/projected/d9b3d4e3-56ad-4b7e-af66-861001077d45-kube-api-access-ggbdg\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.939542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-client-ca\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.939660 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-config\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.939800 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-proxy-ca-bundles\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.940063 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-config\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.940192 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-client-ca\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.940354 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk2hx\" (UniqueName: \"kubernetes.io/projected/1af9b8d6-f44c-4664-8703-6556205c0611-kube-api-access-rk2hx\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.940576 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-client-ca\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.940996 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-proxy-ca-bundles\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.941068 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-client-ca\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.941137 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af9b8d6-f44c-4664-8703-6556205c0611-config\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.941263 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9b3d4e3-56ad-4b7e-af66-861001077d45-config\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.943958 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af9b8d6-f44c-4664-8703-6556205c0611-serving-cert\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.943982 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9b3d4e3-56ad-4b7e-af66-861001077d45-serving-cert\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.955487 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2hx\" (UniqueName: \"kubernetes.io/projected/1af9b8d6-f44c-4664-8703-6556205c0611-kube-api-access-rk2hx\") pod \"controller-manager-67d59ffb45-x7gzh\" (UID: \"1af9b8d6-f44c-4664-8703-6556205c0611\") " pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:11 crc kubenswrapper[4834]: I0121 14:37:11.958763 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbdg\" (UniqueName: \"kubernetes.io/projected/d9b3d4e3-56ad-4b7e-af66-861001077d45-kube-api-access-ggbdg\") pod \"route-controller-manager-68f69d46f8-gmcpl\" (UID: \"d9b3d4e3-56ad-4b7e-af66-861001077d45\") " pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.079879 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.094642 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.332211 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1965e7-dfbc-45f7-8666-5b02b29ff934" path="/var/lib/kubelet/pods/4e1965e7-dfbc-45f7-8666-5b02b29ff934/volumes" Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.333109 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee68d2b-e77e-499e-a6c4-6a50db160908" path="/var/lib/kubelet/pods/5ee68d2b-e77e-499e-a6c4-6a50db160908/volumes" Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.486648 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67d59ffb45-x7gzh"] Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.528414 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" event={"ID":"1af9b8d6-f44c-4664-8703-6556205c0611","Type":"ContainerStarted","Data":"3fe72e034191d0367f1a991d941065247365efed672f67d035155d45f99e6e81"} Jan 21 14:37:12 crc kubenswrapper[4834]: W0121 14:37:12.529981 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9b3d4e3_56ad_4b7e_af66_861001077d45.slice/crio-2ecdda8418dff6559dec685fe70a63051e7e2356d6f2fa2f9a087559358ab07d WatchSource:0}: Error finding container 2ecdda8418dff6559dec685fe70a63051e7e2356d6f2fa2f9a087559358ab07d: Status 404 returned error can't find the container with id 2ecdda8418dff6559dec685fe70a63051e7e2356d6f2fa2f9a087559358ab07d Jan 21 14:37:12 crc kubenswrapper[4834]: I0121 14:37:12.533081 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl"] Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.535435 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" event={"ID":"d9b3d4e3-56ad-4b7e-af66-861001077d45","Type":"ContainerStarted","Data":"ed61d738ea152d2346d63f09d2412ea6159101856022276383bc0c2b56c8670e"} Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.535843 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.535857 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" event={"ID":"d9b3d4e3-56ad-4b7e-af66-861001077d45","Type":"ContainerStarted","Data":"2ecdda8418dff6559dec685fe70a63051e7e2356d6f2fa2f9a087559358ab07d"} Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.539365 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" event={"ID":"1af9b8d6-f44c-4664-8703-6556205c0611","Type":"ContainerStarted","Data":"3aa4adb3e1c8a1fb523469903e55f84c813b36c365fbf20b15f0e397b3487e5f"} Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.539562 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.544232 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" 
Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.557101 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" podStartSLOduration=3.557084288 podStartE2EDuration="3.557084288s" podCreationTimestamp="2026-01-21 14:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:13.555855249 +0000 UTC m=+379.530204294" watchObservedRunningTime="2026-01-21 14:37:13.557084288 +0000 UTC m=+379.531433333" Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.570441 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68f69d46f8-gmcpl" Jan 21 14:37:13 crc kubenswrapper[4834]: I0121 14:37:13.591669 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67d59ffb45-x7gzh" podStartSLOduration=3.591653624 podStartE2EDuration="3.591653624s" podCreationTimestamp="2026-01-21 14:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:37:13.589281738 +0000 UTC m=+379.563630783" watchObservedRunningTime="2026-01-21 14:37:13.591653624 +0000 UTC m=+379.566002669" Jan 21 14:37:17 crc kubenswrapper[4834]: I0121 14:37:17.113852 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:17 crc kubenswrapper[4834]: I0121 14:37:17.114313 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.113911 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.114884 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.114958 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.115642 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.115720 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480" gracePeriod=600 Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.727458 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480" exitCode=0 Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.727572 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480"} Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.727844 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb"} Jan 21 14:37:47 crc kubenswrapper[4834]: I0121 14:37:47.727870 4834 scope.go:117] "RemoveContainer" containerID="15c76b86952bd9bb540d2539c363debef6626f859024d8795643731b78829870" Jan 21 14:39:47 crc kubenswrapper[4834]: I0121 14:39:47.113954 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:39:47 crc kubenswrapper[4834]: I0121 14:39:47.114615 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:40:17 crc kubenswrapper[4834]: I0121 14:40:17.113897 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:17 crc kubenswrapper[4834]: I0121 14:40:17.114479 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.114622 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.115520 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.115604 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.116504 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.116607 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb" gracePeriod=600 Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.798526 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb" exitCode=0 Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.798595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb"} Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.799180 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5"} Jan 21 14:40:47 crc kubenswrapper[4834]: I0121 14:40:47.799225 4834 scope.go:117] "RemoveContainer" containerID="9309683cbdb585cabd230b02b06a5fdc4f3c6d79bb872144234f51ca2d24f480" Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.648183 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qwpj"] Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649562 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-controller" containerID="cri-o://433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649654 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649674 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="nbdb" 
containerID="cri-o://c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649773 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-acl-logging" containerID="cri-o://1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649844 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-node" containerID="cri-o://b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649707 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="northd" containerID="cri-o://a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.649959 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="sbdb" containerID="cri-o://b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" gracePeriod=30 Jan 21 14:41:53 crc kubenswrapper[4834]: I0121 14:41:53.717143 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" containerID="cri-o://1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" gracePeriod=30 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.060203 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/3.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.064706 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovn-acl-logging/0.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.065300 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovn-controller/0.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.065857 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135199 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nkqtm"] Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135467 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135485 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135495 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-node" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135503 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-node" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135513 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135522 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135529 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kubecfg-setup" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135535 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kubecfg-setup" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135544 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="nbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135550 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="nbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135557 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135563 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135572 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135578 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135584 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135592 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135599 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135605 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135613 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="northd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135619 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="northd" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135629 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="sbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135635 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="sbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135643 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-acl-logging" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135652 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-acl-logging" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135741 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="nbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135779 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="sbdb" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135786 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135795 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-acl-logging" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135812 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovn-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135821 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-node" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135828 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="northd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135835 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135842 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135851 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135860 4834 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.135958 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.135967 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.136058 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" containerName="ovnkube-controller" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.137800 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.163892 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.163984 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164049 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164095 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164064 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164145 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164112 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164136 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164185 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164249 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164316 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164334 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164351 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164404 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164421 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164441 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164479 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmwbq\" (UniqueName: \"kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164498 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164539 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164581 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164607 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164628 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164650 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.164666 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash\") pod \"0b3931d0-e57b-457f-94da-b56c92b40090\" (UID: \"0b3931d0-e57b-457f-94da-b56c92b40090\") " Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165109 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log" (OuterVolumeSpecName: "node-log") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165132 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165203 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash" (OuterVolumeSpecName: "host-slash") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165175 4834 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165270 4834 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165273 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165282 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165334 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165302 4834 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165379 4834 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165379 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165419 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165416 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165411 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket" (OuterVolumeSpecName: "log-socket") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165837 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165850 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.165881 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.173579 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.175088 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq" (OuterVolumeSpecName: "kube-api-access-pmwbq") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "kube-api-access-pmwbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.184033 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0b3931d0-e57b-457f-94da-b56c92b40090" (UID: "0b3931d0-e57b-457f-94da-b56c92b40090"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.253235 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovnkube-controller/3.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.255747 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovn-acl-logging/0.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.256505 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6qwpj_0b3931d0-e57b-457f-94da-b56c92b40090/ovn-controller/0.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.256992 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257023 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257034 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257030 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257044 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257124 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257159 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257175 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" exitCode=0 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257172 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257186 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" exitCode=143 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257279 4834 generic.go:334] "Generic (PLEG): container finished" podID="0b3931d0-e57b-457f-94da-b56c92b40090" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" exitCode=143 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257079 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257415 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257432 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257445 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257457 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257468 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257482 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257490 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257496 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257503 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257510 4834 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257516 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257522 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257535 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257544 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257554 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257563 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257570 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257577 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257585 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257591 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257597 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257604 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257610 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257616 4834 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257625 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257633 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257641 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257648 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257655 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257661 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257668 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257674 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257681 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257691 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257697 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257706 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qwpj" event={"ID":"0b3931d0-e57b-457f-94da-b56c92b40090","Type":"ContainerDied","Data":"58fed71eb7c094fd87fa9ff6ea0db61cf1069f41da5a25249a34f29dae98f7dd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257716 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257724 
4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257741 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257748 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257755 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257761 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257766 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257773 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257778 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.257784 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.261796 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/2.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.266985 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-netd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-ovn\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267053 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-node-log\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm"
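
[Editor's note] The burst of reconciler_common.go:245 records above is kubelet's volume manager registering the desired state for the replacement pod ovnkube-node-nkqtm: every volume the pod declares gets a VerifyControllerAttachedVolume entry first, and the MountVolume / "MountVolume.SetUp succeeded" records later in this log work off the gap between that desired state and what is actually mounted. A minimal sketch of that desired-versus-actual reconcile idea, using toy types of our own rather than kubelet's:

```go
package main

import "fmt"

// pendingMounts returns the declared volumes that are not mounted yet;
// each would get a MountVolume operation, and each "MountVolume.SetUp
// succeeded" record in the log corresponds to one moving into mounted.
func pendingMounts(desired []string, mounted map[string]bool) []string {
	var pending []string
	for _, v := range desired {
		if !mounted[v] {
			pending = append(pending, v)
		}
	}
	return pending
}

func main() {
	desired := []string{"host-cni-netd", "run-ovn", "node-log", "ovnkube-config"}
	mounted := map[string]bool{"run-ovn": true}
	fmt.Println(pendingMounts(desired, mounted)) // [host-cni-netd node-log ovnkube-config]
}
```
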
pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267120 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-script-lib\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267147 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcp7\" (UniqueName: \"kubernetes.io/projected/34c89f85-2b76-495b-a8ce-e118c31f08f1-kube-api-access-lqcp7\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267196 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-netns\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267250 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-systemd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267268 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovn-node-metrics-cert\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267286 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-kubelet\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267359 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-systemd-units\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267438 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-log-socket\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-var-lib-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267492 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-config\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267508 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-slash\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267541 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-etc-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267560 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-bin\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267577 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-env-overrides\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267614 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 
14:41:54.267655 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/1.log" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267682 4834 generic.go:334] "Generic (PLEG): container finished" podID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" containerID="97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755" exitCode=2 Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267700 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerDied","Data":"97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267717 4834 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f"} Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267786 4834 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267805 4834 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267823 4834 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267836 4834 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267848 4834 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267859 4834 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267870 4834 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267881 4834 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267892 4834 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267915 4834 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267943 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmwbq\" (UniqueName: \"kubernetes.io/projected/0b3931d0-e57b-457f-94da-b56c92b40090-kube-api-access-pmwbq\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267958 4834 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267970 4834 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0b3931d0-e57b-457f-94da-b56c92b40090-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267983 4834 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.267996 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b3931d0-e57b-457f-94da-b56c92b40090-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.268009 4834 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0b3931d0-e57b-457f-94da-b56c92b40090-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.268088 4834 scope.go:117] "RemoveContainer" containerID="97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.268425 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gd9jh_openshift-multus(dbe1b4f9-f835-43ba-9496-a9e60af3b87f)\"" pod="openshift-multus/multus-gd9jh" podUID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.280478 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.304403 4834 scope.go:117] "RemoveContainer" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.312195 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qwpj"] Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.318845 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qwpj"] Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.326700 4834 scope.go:117] "RemoveContainer" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.344041 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3931d0-e57b-457f-94da-b56c92b40090" path="/var/lib/kubelet/pods/0b3931d0-e57b-457f-94da-b56c92b40090/volumes" Jan 21 14:41:54 crc 
Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.351648 4834 scope.go:117] "RemoveContainer" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.367270 4834 scope.go:117] "RemoveContainer" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368647 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-systemd-units\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368687 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-log-socket\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368714 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-var-lib-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368776 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-config\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368793 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-slash\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368821 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-etc-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368841 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-bin\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368859 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368878 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-env-overrides\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368911 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368955 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-log-socket\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369027 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-netd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.368985 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-netd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369104 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-cni-bin\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369163 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-systemd-units\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369193 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-etc-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369219 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-ovn\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369178 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369268 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-node-log\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369279 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-slash\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369301 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369332 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-openvswitch\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369365 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-node-log\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369371 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-ovn\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369400 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-script-lib\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369468 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369556 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-var-lib-openvswitch\") pod \"ovnkube-node-nkqtm\" 
(UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369721 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcp7\" (UniqueName: \"kubernetes.io/projected/34c89f85-2b76-495b-a8ce-e118c31f08f1-kube-api-access-lqcp7\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369753 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-netns\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369797 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-systemd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369829 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovn-node-metrics-cert\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369849 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-kubelet\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369869 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-run-netns\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369959 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-run-systemd\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369985 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-env-overrides\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.369997 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34c89f85-2b76-495b-a8ce-e118c31f08f1-host-kubelet\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc 
kubenswrapper[4834]: I0121 14:41:54.370024 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-config\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.370591 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovnkube-script-lib\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.376677 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34c89f85-2b76-495b-a8ce-e118c31f08f1-ovn-node-metrics-cert\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.386755 4834 scope.go:117] "RemoveContainer" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.391711 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcp7\" (UniqueName: \"kubernetes.io/projected/34c89f85-2b76-495b-a8ce-e118c31f08f1-kube-api-access-lqcp7\") pod \"ovnkube-node-nkqtm\" (UID: \"34c89f85-2b76-495b-a8ce-e118c31f08f1\") " pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.400462 4834 scope.go:117] "RemoveContainer" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.416250 4834 scope.go:117] "RemoveContainer" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.431624 4834 scope.go:117] "RemoveContainer" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.446304 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.446920 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.446991 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} err="failed to get container status \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.447027 4834 scope.go:117] 
"RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.447314 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": container with ID starting with fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6 not found: ID does not exist" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.447346 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} err="failed to get container status \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": rpc error: code = NotFound desc = could not find container \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": container with ID starting with fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.447364 4834 scope.go:117] "RemoveContainer" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.447678 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": container with ID starting with b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3 not found: ID does not exist" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.447707 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} err="failed to get container status \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": rpc error: code = NotFound desc = could not find container \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": container with ID starting with b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.447724 4834 scope.go:117] "RemoveContainer" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.448025 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": container with ID starting with c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404 not found: ID does not exist" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448052 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} err="failed to get container status \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": rpc error: code = NotFound desc = could not find container \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": container with ID starting with 
c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448070 4834 scope.go:117] "RemoveContainer" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.448257 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": container with ID starting with a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2 not found: ID does not exist" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448285 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} err="failed to get container status \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": rpc error: code = NotFound desc = could not find container \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": container with ID starting with a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448302 4834 scope.go:117] "RemoveContainer" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.448532 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": container with ID starting with c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd not found: ID does not exist" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448561 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} err="failed to get container status \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": rpc error: code = NotFound desc = could not find container \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": container with ID starting with c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448613 4834 scope.go:117] "RemoveContainer" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.448838 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": container with ID starting with b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca not found: ID does not exist" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448865 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} err="failed to get container status \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": rpc 
error: code = NotFound desc = could not find container \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": container with ID starting with b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.448884 4834 scope.go:117] "RemoveContainer" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.449267 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": container with ID starting with 1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2 not found: ID does not exist" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.449290 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} err="failed to get container status \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": rpc error: code = NotFound desc = could not find container \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": container with ID starting with 1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.449306 4834 scope.go:117] "RemoveContainer" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.449546 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": container with ID starting with 433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3 not found: ID does not exist" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.449582 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} err="failed to get container status \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": rpc error: code = NotFound desc = could not find container \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": container with ID starting with 433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.449605 4834 scope.go:117] "RemoveContainer" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" Jan 21 14:41:54 crc kubenswrapper[4834]: E0121 14:41:54.449821 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": container with ID starting with 7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9 not found: ID does not exist" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"
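
[Editor's note] Each "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pair in this stretch is kubelet re-issuing a delete for a container CRI-O has already removed: the runtime answers the status lookup with gRPC code NotFound, and kubelet logs the error and moves on. Treating NotFound as "already gone" keeps deletion idempotent, so after a pod teardown these records are expected noise rather than a failure to act on. A minimal sketch of that pattern, using a hypothetical helper name rather than anything from kubelet:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats a gRPC NotFound from the container runtime as
// success: the container is already gone, so a delete has nothing to do.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container: ID does not exist")
	fmt.Println(ignoreNotFound(err)) // <nil>
}
```
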
containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} err="failed to get container status \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": rpc error: code = NotFound desc = could not find container \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": container with ID starting with 7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.449857 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450150 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} err="failed to get container status \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450176 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450387 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} err="failed to get container status \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": rpc error: code = NotFound desc = could not find container \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": container with ID starting with fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450404 4834 scope.go:117] "RemoveContainer" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450598 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} err="failed to get container status \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": rpc error: code = NotFound desc = could not find container \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": container with ID starting with b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450616 4834 scope.go:117] "RemoveContainer" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450802 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} err="failed to get container status \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": rpc error: code = NotFound desc = could not find container \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": container with ID starting with c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404 not found: ID does not exist" Jan 
21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.450820 4834 scope.go:117] "RemoveContainer" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451031 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} err="failed to get container status \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": rpc error: code = NotFound desc = could not find container \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": container with ID starting with a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451051 4834 scope.go:117] "RemoveContainer" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451300 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} err="failed to get container status \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": rpc error: code = NotFound desc = could not find container \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": container with ID starting with c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451320 4834 scope.go:117] "RemoveContainer" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451498 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} err="failed to get container status \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": rpc error: code = NotFound desc = could not find container \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": container with ID starting with b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451518 4834 scope.go:117] "RemoveContainer" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451676 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} err="failed to get container status \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": rpc error: code = NotFound desc = could not find container \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": container with ID starting with 1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451693 4834 scope.go:117] "RemoveContainer" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451856 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} err="failed to get container status 
\"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": rpc error: code = NotFound desc = could not find container \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": container with ID starting with 433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.451879 4834 scope.go:117] "RemoveContainer" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452054 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} err="failed to get container status \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": rpc error: code = NotFound desc = could not find container \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": container with ID starting with 7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452073 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452239 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} err="failed to get container status \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452257 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452472 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} err="failed to get container status \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": rpc error: code = NotFound desc = could not find container \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": container with ID starting with fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452490 4834 scope.go:117] "RemoveContainer" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452668 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} err="failed to get container status \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": rpc error: code = NotFound desc = could not find container \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": container with ID starting with b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452686 4834 scope.go:117] "RemoveContainer" 
containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452872 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} err="failed to get container status \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": rpc error: code = NotFound desc = could not find container \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": container with ID starting with c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.452891 4834 scope.go:117] "RemoveContainer" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453372 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} err="failed to get container status \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": rpc error: code = NotFound desc = could not find container \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": container with ID starting with a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453416 4834 scope.go:117] "RemoveContainer" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453618 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} err="failed to get container status \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": rpc error: code = NotFound desc = could not find container \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": container with ID starting with c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453639 4834 scope.go:117] "RemoveContainer" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453805 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} err="failed to get container status \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": rpc error: code = NotFound desc = could not find container \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": container with ID starting with b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.453823 4834 scope.go:117] "RemoveContainer" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454016 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} err="failed to get container status \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": rpc error: code = NotFound desc = could not find 
container \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": container with ID starting with 1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454036 4834 scope.go:117] "RemoveContainer" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454193 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} err="failed to get container status \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": rpc error: code = NotFound desc = could not find container \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": container with ID starting with 433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454212 4834 scope.go:117] "RemoveContainer" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454380 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} err="failed to get container status \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": rpc error: code = NotFound desc = could not find container \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": container with ID starting with 7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454398 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454553 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} err="failed to get container status \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454571 4834 scope.go:117] "RemoveContainer" containerID="fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454734 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6"} err="failed to get container status \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": rpc error: code = NotFound desc = could not find container \"fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6\": container with ID starting with fcd667cdaa6f0ea64824c4af194b116939b8bc38a6cea4203a1262f6c592f1e6 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.454752 4834 scope.go:117] "RemoveContainer" containerID="b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.455192 4834 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.455769 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3"} err="failed to get container status \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": rpc error: code = NotFound desc = could not find container \"b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3\": container with ID starting with b5c3f42a737a03a1508431a6d8fefcc55cb11cb1479cf37a045534e7bbd1abc3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.456125 4834 scope.go:117] "RemoveContainer" containerID="c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.464368 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404"} err="failed to get container status \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": rpc error: code = NotFound desc = could not find container \"c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404\": container with ID starting with c3f9ee2f0e4c9cc37ef1052575e79b1ae60ea32d1fbd9e3ffff7be0376bd3404 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.464420 4834 scope.go:117] "RemoveContainer" containerID="a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.465083 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2"} err="failed to get container status \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": rpc error: code = NotFound desc = could not find container \"a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2\": container with ID starting with a37b436528c29f85de25c861a3b65b4da952459f99fd18240c0a81daf8458dd2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.465125 4834 scope.go:117] "RemoveContainer" containerID="c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.465478 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd"} err="failed to get container status \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": rpc error: code = NotFound desc = could not find container \"c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd\": container with ID starting with c5a45ec0299b08fb0c82ba817402484b75c80dbe3c59a0817a497c07d8b39bcd not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.465501 4834 scope.go:117] "RemoveContainer" containerID="b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.465858 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca"} err="failed to get container status \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": rpc error: code = 
NotFound desc = could not find container \"b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca\": container with ID starting with b5ece1b438155597d9bed39d71b0e1a830962b020987a9d4b4a0b017bd62b0ca not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.466005 4834 scope.go:117] "RemoveContainer" containerID="1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.466399 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2"} err="failed to get container status \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": rpc error: code = NotFound desc = could not find container \"1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2\": container with ID starting with 1af3343a7da043e65bf73b2e1f0d1dc63fc9e58f95602cf53ae2f426ab0cdeb2 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.466419 4834 scope.go:117] "RemoveContainer" containerID="433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.466707 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3"} err="failed to get container status \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": rpc error: code = NotFound desc = could not find container \"433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3\": container with ID starting with 433ff9e553ce13be17b94b6cb9d20523b18e33c7961ab5546ac06045a35680e3 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.466833 4834 scope.go:117] "RemoveContainer" containerID="7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.467223 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9"} err="failed to get container status \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": rpc error: code = NotFound desc = could not find container \"7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9\": container with ID starting with 7e5c936e8033b46a864fdb5d32adaafa50e2b078757c91241a377505b8f71ae9 not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.467249 4834 scope.go:117] "RemoveContainer" containerID="1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.467535 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d"} err="failed to get container status \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": rpc error: code = NotFound desc = could not find container \"1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d\": container with ID starting with 1dee865fd95238480d3bd4c55caff5308eb3d18754ccac21274fcd3f77c9585d not found: ID does not exist" Jan 21 14:41:54 crc kubenswrapper[4834]: I0121 14:41:54.575201 4834 scope.go:117] "RemoveContainer" containerID="17b4463fc47904fecd4c8df427f89e6b9a009261c4636d2af22f3920c883ef8f" Jan 21 14:41:55 crc kubenswrapper[4834]: I0121 
14:41:55.274766 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"972fa701b49c9a8ee6a000df965ec226ac9fe42d32ac6d056f352502164476de"} Jan 21 14:41:56 crc kubenswrapper[4834]: I0121 14:41:56.285893 4834 generic.go:334] "Generic (PLEG): container finished" podID="34c89f85-2b76-495b-a8ce-e118c31f08f1" containerID="29baa435aa55d15548e6ce11a41c805e14ea3e577c0be6c80327ace8b6ce1eec" exitCode=0 Jan 21 14:41:56 crc kubenswrapper[4834]: I0121 14:41:56.286009 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerDied","Data":"29baa435aa55d15548e6ce11a41c805e14ea3e577c0be6c80327ace8b6ce1eec"} Jan 21 14:41:56 crc kubenswrapper[4834]: I0121 14:41:56.287822 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/2.log" Jan 21 14:41:57 crc kubenswrapper[4834]: I0121 14:41:57.302221 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"2eafcad87fafb6627e541bca5de7bf566dffe8902eba90261d6fbb9cb36d3c70"} Jan 21 14:41:57 crc kubenswrapper[4834]: I0121 14:41:57.303125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"18909baa88c268c0c0996e1ff4a128ba4c546e210490c44e1fd84f09c144210d"} Jan 21 14:41:57 crc kubenswrapper[4834]: I0121 14:41:57.303152 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"a07e8d2d73ed6fb933913557bc4c0f8aa69388160dc4b3454bad7bf8c3af1565"} Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.310303 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"7f022d8e9cafff4eb0f01edda07119997064528fe53e5ac1290f44ebb802fcf1"} Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.310379 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"bc681d1410bff8097a8d241c84589f949ffd374a5d31e0e1d53c274b5c7a7cbe"} Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.497046 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-nfdv4"] Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.497969 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.500285 4834 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j5dvb" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.500394 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.500509 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.500691 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.531653 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x74pn\" (UniqueName: \"kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.531713 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.531781 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.633191 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.633346 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x74pn\" (UniqueName: \"kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.633380 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.633653 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.634506 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.656575 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x74pn\" (UniqueName: \"kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn\") pod \"crc-storage-crc-nfdv4\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: I0121 14:41:58.818705 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: E0121 14:41:58.843983 4834 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(1719c9f26d661efa578679a5f80b07920705ee77a576ac9f6dbcb4d1c83a0255): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:41:58 crc kubenswrapper[4834]: E0121 14:41:58.844141 4834 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(1719c9f26d661efa578679a5f80b07920705ee77a576ac9f6dbcb4d1c83a0255): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: E0121 14:41:58.844190 4834 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(1719c9f26d661efa578679a5f80b07920705ee77a576ac9f6dbcb4d1c83a0255): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:41:58 crc kubenswrapper[4834]: E0121 14:41:58.844278 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(1719c9f26d661efa578679a5f80b07920705ee77a576ac9f6dbcb4d1c83a0255): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-nfdv4" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" Jan 21 14:42:00 crc kubenswrapper[4834]: I0121 14:42:00.340454 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"7e25379fd99e107c18003f42bb986fdc6bb84f07fcaaab3950476fac09871653"} Jan 21 14:42:03 crc kubenswrapper[4834]: I0121 14:42:03.356961 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"ee4d302eaf809cb28bf77aea989283099407242a7ba77e79d819ef9f59d98974"} Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.326422 4834 scope.go:117] "RemoveContainer" containerID="97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755" Jan 21 14:42:06 crc kubenswrapper[4834]: E0121 14:42:06.327636 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gd9jh_openshift-multus(dbe1b4f9-f835-43ba-9496-a9e60af3b87f)\"" pod="openshift-multus/multus-gd9jh" podUID="dbe1b4f9-f835-43ba-9496-a9e60af3b87f" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.389461 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" event={"ID":"34c89f85-2b76-495b-a8ce-e118c31f08f1","Type":"ContainerStarted","Data":"97c64fb355f91bc9d035dda07b5882e81908b8237e64ec87aae6a48053a8707b"} Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.390446 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.390518 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.424293 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.454440 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" podStartSLOduration=12.454416715 podStartE2EDuration="12.454416715s" podCreationTimestamp="2026-01-21 14:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:42:06.444278308 +0000 UTC m=+672.418627353" watchObservedRunningTime="2026-01-21 14:42:06.454416715 +0000 UTC m=+672.428765760" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.687868 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nfdv4"] Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.688140 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:06 crc kubenswrapper[4834]: I0121 14:42:06.688691 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:06 crc kubenswrapper[4834]: E0121 14:42:06.717045 4834 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(8c39180a4a8682f79d01c2339b9033b7d66e1d3289fda30d4240394263db78ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 14:42:06 crc kubenswrapper[4834]: E0121 14:42:06.717139 4834 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(8c39180a4a8682f79d01c2339b9033b7d66e1d3289fda30d4240394263db78ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:06 crc kubenswrapper[4834]: E0121 14:42:06.717165 4834 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(8c39180a4a8682f79d01c2339b9033b7d66e1d3289fda30d4240394263db78ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:06 crc kubenswrapper[4834]: E0121 14:42:06.717223 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(8c39180a4a8682f79d01c2339b9033b7d66e1d3289fda30d4240394263db78ab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-nfdv4" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" Jan 21 14:42:07 crc kubenswrapper[4834]: I0121 14:42:07.395708 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:07 crc kubenswrapper[4834]: I0121 14:42:07.437365 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:19 crc kubenswrapper[4834]: I0121 14:42:19.324612 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:19 crc kubenswrapper[4834]: I0121 14:42:19.325949 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:19 crc kubenswrapper[4834]: E0121 14:42:19.361309 4834 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(b14e34edf8f68d22ac5f02fbae3861adfaf281af13dea550f84f68f8fd7d42e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 14:42:19 crc kubenswrapper[4834]: E0121 14:42:19.361995 4834 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(b14e34edf8f68d22ac5f02fbae3861adfaf281af13dea550f84f68f8fd7d42e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:19 crc kubenswrapper[4834]: E0121 14:42:19.362033 4834 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(b14e34edf8f68d22ac5f02fbae3861adfaf281af13dea550f84f68f8fd7d42e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:19 crc kubenswrapper[4834]: E0121 14:42:19.362120 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-nfdv4_crc-storage(f7c59b7c-e36a-44b3-a34a-16939ae1ccb9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nfdv4_crc-storage_f7c59b7c-e36a-44b3-a34a-16939ae1ccb9_0(b14e34edf8f68d22ac5f02fbae3861adfaf281af13dea550f84f68f8fd7d42e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-nfdv4" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" Jan 21 14:42:20 crc kubenswrapper[4834]: I0121 14:42:20.325327 4834 scope.go:117] "RemoveContainer" containerID="97e7484e5783d038480e79d49aa8e44f76b3324401232d77cab73d9076110755" Jan 21 14:42:21 crc kubenswrapper[4834]: I0121 14:42:21.496624 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/2.log" Jan 21 14:42:21 crc kubenswrapper[4834]: I0121 14:42:21.497155 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gd9jh" event={"ID":"dbe1b4f9-f835-43ba-9496-a9e60af3b87f","Type":"ContainerStarted","Data":"1d0fb9e94352b096a86bf8e970a2f53db86c1b00bcb29e56717cf2f24344843c"} Jan 21 14:42:24 crc kubenswrapper[4834]: I0121 14:42:24.492497 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nkqtm" Jan 21 14:42:30 crc kubenswrapper[4834]: I0121 14:42:30.324418 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:30 crc kubenswrapper[4834]: I0121 14:42:30.325638 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:30 crc kubenswrapper[4834]: I0121 14:42:30.621369 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nfdv4"] Jan 21 14:42:30 crc kubenswrapper[4834]: I0121 14:42:30.631496 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:42:31 crc kubenswrapper[4834]: I0121 14:42:31.566572 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nfdv4" event={"ID":"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9","Type":"ContainerStarted","Data":"9ed31be39975a2a0f2a72b91f7ece40586decd38aa556b916c78be9d1158e967"} Jan 21 14:42:37 crc kubenswrapper[4834]: I0121 14:42:37.613198 4834 generic.go:334] "Generic (PLEG): container finished" podID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" containerID="4a1dc79cb8a892c1c45b57bc320caefdcc8e50182e67033f286fa23bfd71dcd2" exitCode=0 Jan 21 14:42:37 crc kubenswrapper[4834]: I0121 14:42:37.613274 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nfdv4" event={"ID":"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9","Type":"ContainerDied","Data":"4a1dc79cb8a892c1c45b57bc320caefdcc8e50182e67033f286fa23bfd71dcd2"} Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.873906 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.929562 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x74pn\" (UniqueName: \"kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn\") pod \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.929687 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt\") pod \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.929806 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage\") pod \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\" (UID: \"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9\") " Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.929805 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" (UID: "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.930135 4834 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.939155 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn" (OuterVolumeSpecName: "kube-api-access-x74pn") pod "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" (UID: "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9"). 
InnerVolumeSpecName "kube-api-access-x74pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:42:38 crc kubenswrapper[4834]: I0121 14:42:38.948190 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" (UID: "f7c59b7c-e36a-44b3-a34a-16939ae1ccb9"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:42:39 crc kubenswrapper[4834]: I0121 14:42:39.031553 4834 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:39 crc kubenswrapper[4834]: I0121 14:42:39.031603 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x74pn\" (UniqueName: \"kubernetes.io/projected/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9-kube-api-access-x74pn\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:39 crc kubenswrapper[4834]: I0121 14:42:39.631121 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nfdv4" event={"ID":"f7c59b7c-e36a-44b3-a34a-16939ae1ccb9","Type":"ContainerDied","Data":"9ed31be39975a2a0f2a72b91f7ece40586decd38aa556b916c78be9d1158e967"} Jan 21 14:42:39 crc kubenswrapper[4834]: I0121 14:42:39.631185 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed31be39975a2a0f2a72b91f7ece40586decd38aa556b916c78be9d1158e967" Jan 21 14:42:39 crc kubenswrapper[4834]: I0121 14:42:39.631212 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nfdv4" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.911121 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh"] Jan 21 14:42:45 crc kubenswrapper[4834]: E0121 14:42:45.911769 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" containerName="storage" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.911788 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" containerName="storage" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.911963 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" containerName="storage" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.913222 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.916078 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.926085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh"] Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.938731 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.939195 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn69c\" (UniqueName: \"kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:45 crc kubenswrapper[4834]: I0121 14:42:45.939342 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.040479 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.040809 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn69c\" (UniqueName: \"kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.041008 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.041175 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.041381 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.064399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn69c\" (UniqueName: \"kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.232255 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.445834 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh"] Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.678265 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerStarted","Data":"b80c99cba4a7592ab4ca3118427b50f35d3f40b4f2bfbabd296ffd19678bfa49"} Jan 21 14:42:46 crc kubenswrapper[4834]: I0121 14:42:46.678345 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerStarted","Data":"adcf8c9632a97b9f7ffef5a6a36988606b22a295a5402bccb824316569daaa4b"} Jan 21 14:42:47 crc kubenswrapper[4834]: I0121 14:42:47.114540 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:42:47 crc kubenswrapper[4834]: I0121 14:42:47.114602 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:42:48 crc kubenswrapper[4834]: I0121 14:42:48.691860 4834 generic.go:334] "Generic (PLEG): container finished" podID="a64a71e4-a7d7-4267-978c-48140c262706" containerID="b80c99cba4a7592ab4ca3118427b50f35d3f40b4f2bfbabd296ffd19678bfa49" exitCode=0 Jan 21 14:42:48 crc kubenswrapper[4834]: I0121 14:42:48.692089 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerDied","Data":"b80c99cba4a7592ab4ca3118427b50f35d3f40b4f2bfbabd296ffd19678bfa49"} Jan 21 14:42:53 crc kubenswrapper[4834]: I0121 14:42:53.726363 4834 generic.go:334] "Generic (PLEG): container finished" podID="a64a71e4-a7d7-4267-978c-48140c262706" containerID="651eb3e4ff3b7d4fa9568fe407f96eaa415b6020ead859ca9b9da94d5da8799f" exitCode=0 Jan 21 14:42:53 crc kubenswrapper[4834]: I0121 14:42:53.726478 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerDied","Data":"651eb3e4ff3b7d4fa9568fe407f96eaa415b6020ead859ca9b9da94d5da8799f"} Jan 21 14:42:54 crc kubenswrapper[4834]: I0121 14:42:54.736998 4834 generic.go:334] "Generic (PLEG): container finished" podID="a64a71e4-a7d7-4267-978c-48140c262706" containerID="7a7c9d02548c200d6ff0b52d8fa0ada88574c8073b01d0603b97bff079f8a839" exitCode=0 Jan 21 14:42:54 crc kubenswrapper[4834]: I0121 14:42:54.737086 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerDied","Data":"7a7c9d02548c200d6ff0b52d8fa0ada88574c8073b01d0603b97bff079f8a839"} Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.049564 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.101647 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn69c\" (UniqueName: \"kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c\") pod \"a64a71e4-a7d7-4267-978c-48140c262706\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.101758 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util\") pod \"a64a71e4-a7d7-4267-978c-48140c262706\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.101823 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle\") pod \"a64a71e4-a7d7-4267-978c-48140c262706\" (UID: \"a64a71e4-a7d7-4267-978c-48140c262706\") " Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.102863 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle" (OuterVolumeSpecName: "bundle") pod "a64a71e4-a7d7-4267-978c-48140c262706" (UID: "a64a71e4-a7d7-4267-978c-48140c262706"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.110267 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c" (OuterVolumeSpecName: "kube-api-access-hn69c") pod "a64a71e4-a7d7-4267-978c-48140c262706" (UID: "a64a71e4-a7d7-4267-978c-48140c262706"). 
InnerVolumeSpecName "kube-api-access-hn69c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.122851 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util" (OuterVolumeSpecName: "util") pod "a64a71e4-a7d7-4267-978c-48140c262706" (UID: "a64a71e4-a7d7-4267-978c-48140c262706"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.202741 4834 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.202784 4834 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a64a71e4-a7d7-4267-978c-48140c262706-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.202798 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn69c\" (UniqueName: \"kubernetes.io/projected/a64a71e4-a7d7-4267-978c-48140c262706-kube-api-access-hn69c\") on node \"crc\" DevicePath \"\"" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.753532 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" event={"ID":"a64a71e4-a7d7-4267-978c-48140c262706","Type":"ContainerDied","Data":"adcf8c9632a97b9f7ffef5a6a36988606b22a295a5402bccb824316569daaa4b"} Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.753587 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adcf8c9632a97b9f7ffef5a6a36988606b22a295a5402bccb824316569daaa4b" Jan 21 14:42:56 crc kubenswrapper[4834]: I0121 14:42:56.753691 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.333871 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fv8ww"] Jan 21 14:43:02 crc kubenswrapper[4834]: E0121 14:43:02.334739 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="pull" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.334754 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="pull" Jan 21 14:43:02 crc kubenswrapper[4834]: E0121 14:43:02.334762 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="extract" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.334769 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="extract" Jan 21 14:43:02 crc kubenswrapper[4834]: E0121 14:43:02.334794 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="util" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.334802 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="util" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.334914 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64a71e4-a7d7-4267-978c-48140c262706" containerName="extract" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.335428 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.339471 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.339601 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.341510 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-z6cwz" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.355485 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fv8ww"] Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.414501 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gvz8\" (UniqueName: \"kubernetes.io/projected/f2d9a779-b241-41cd-b261-9f437b8cac1f-kube-api-access-4gvz8\") pod \"nmstate-operator-646758c888-fv8ww\" (UID: \"f2d9a779-b241-41cd-b261-9f437b8cac1f\") " pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.515752 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gvz8\" (UniqueName: \"kubernetes.io/projected/f2d9a779-b241-41cd-b261-9f437b8cac1f-kube-api-access-4gvz8\") pod \"nmstate-operator-646758c888-fv8ww\" (UID: \"f2d9a779-b241-41cd-b261-9f437b8cac1f\") " pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.535656 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gvz8\" 
(UniqueName: \"kubernetes.io/projected/f2d9a779-b241-41cd-b261-9f437b8cac1f-kube-api-access-4gvz8\") pod \"nmstate-operator-646758c888-fv8ww\" (UID: \"f2d9a779-b241-41cd-b261-9f437b8cac1f\") " pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.658852 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" Jan 21 14:43:02 crc kubenswrapper[4834]: I0121 14:43:02.890281 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fv8ww"] Jan 21 14:43:03 crc kubenswrapper[4834]: I0121 14:43:03.804440 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" event={"ID":"f2d9a779-b241-41cd-b261-9f437b8cac1f","Type":"ContainerStarted","Data":"bf734722ac7286852fdbf0437fe34b5d4b5a191945ea784abfc4fecefb4f7486"} Jan 21 14:43:06 crc kubenswrapper[4834]: I0121 14:43:06.829079 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" event={"ID":"f2d9a779-b241-41cd-b261-9f437b8cac1f","Type":"ContainerStarted","Data":"8fe1f87032226a63c2804d549a8c40b17060684566fb3210e05cc4c8231a80ca"} Jan 21 14:43:06 crc kubenswrapper[4834]: I0121 14:43:06.853285 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-fv8ww" podStartSLOduration=1.3390240310000001 podStartE2EDuration="4.853254919s" podCreationTimestamp="2026-01-21 14:43:02 +0000 UTC" firstStartedPulling="2026-01-21 14:43:02.904804189 +0000 UTC m=+728.879168224" lastFinishedPulling="2026-01-21 14:43:06.419050067 +0000 UTC m=+732.393399112" observedRunningTime="2026-01-21 14:43:06.852579328 +0000 UTC m=+732.826928373" watchObservedRunningTime="2026-01-21 14:43:06.853254919 +0000 UTC m=+732.827603964" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.851910 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b69n7"] Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.852991 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" Jan 21 14:43:07 crc kubenswrapper[4834]: W0121 14:43:07.855191 4834 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-twtj6": failed to list *v1.Secret: secrets "nmstate-handler-dockercfg-twtj6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Jan 21 14:43:07 crc kubenswrapper[4834]: E0121 14:43:07.855737 4834 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-twtj6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nmstate-handler-dockercfg-twtj6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.877301 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw"] Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.878211 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.887597 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.893325 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2hj92"] Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.894094 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.908714 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkws5\" (UniqueName: \"kubernetes.io/projected/f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc-kube-api-access-zkws5\") pod \"nmstate-metrics-54757c584b-b69n7\" (UID: \"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.922171 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b69n7"] Jan 21 14:43:07 crc kubenswrapper[4834]: I0121 14:43:07.939016 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.009916 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whp84\" (UniqueName: \"kubernetes.io/projected/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-kube-api-access-whp84\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.009985 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-nmstate-lock\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.010077 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.010191 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkws5\" (UniqueName: \"kubernetes.io/projected/f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc-kube-api-access-zkws5\") pod \"nmstate-metrics-54757c584b-b69n7\" (UID: \"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.010278 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-dbus-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.010334 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-ovs-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.010360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9n6w\" (UniqueName: \"kubernetes.io/projected/9ea294a4-02f7-4dcc-9127-12ed01d12b40-kube-api-access-m9n6w\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.012605 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.013538 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.015723 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-75q8v" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.016624 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.016993 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.058554 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.063912 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkws5\" (UniqueName: \"kubernetes.io/projected/f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc-kube-api-access-zkws5\") pod \"nmstate-metrics-54757c584b-b69n7\" (UID: \"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112406 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112513 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-ovs-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112544 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9n6w\" (UniqueName: \"kubernetes.io/projected/9ea294a4-02f7-4dcc-9127-12ed01d12b40-kube-api-access-m9n6w\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112599 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-whp84\" (UniqueName: \"kubernetes.io/projected/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-kube-api-access-whp84\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112624 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-nmstate-lock\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112671 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112676 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-ovs-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112690 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.112858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfvk\" (UniqueName: \"kubernetes.io/projected/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-kube-api-access-rlfvk\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.113047 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-dbus-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.113526 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-dbus-socket\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.113705 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9ea294a4-02f7-4dcc-9127-12ed01d12b40-nmstate-lock\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.120049 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.136124 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whp84\" (UniqueName: \"kubernetes.io/projected/dc2d6e5b-0933-409b-8934-cec8c98f5f7a-kube-api-access-whp84\") pod \"nmstate-webhook-8474b5b9d8-dkwnw\" (UID: \"dc2d6e5b-0933-409b-8934-cec8c98f5f7a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.136228 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9n6w\" (UniqueName: \"kubernetes.io/projected/9ea294a4-02f7-4dcc-9127-12ed01d12b40-kube-api-access-m9n6w\") pod \"nmstate-handler-2hj92\" (UID: \"9ea294a4-02f7-4dcc-9127-12ed01d12b40\") " pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.214131 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.214203 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfvk\" (UniqueName: \"kubernetes.io/projected/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-kube-api-access-rlfvk\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.214255 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.215807 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.221227 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: \"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.232108 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfvk\" (UniqueName: \"kubernetes.io/projected/c74043b9-a7ba-40e4-9263-2e093fe9e7a6-kube-api-access-rlfvk\") pod \"nmstate-console-plugin-7754f76f8b-jnt5z\" (UID: 
\"c74043b9-a7ba-40e4-9263-2e093fe9e7a6\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.235964 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5f5d66fd8f-wgk4s"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.236719 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.305305 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f5d66fd8f-wgk4s"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.315728 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-trusted-ca-bundle\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.315794 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-oauth-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.315821 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.315871 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-oauth-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.316049 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-service-ca\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.316156 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.316222 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4xzb\" (UniqueName: \"kubernetes.io/projected/b6915251-d587-47a3-9e13-1b2aeb98b1e9-kube-api-access-m4xzb\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc 
kubenswrapper[4834]: I0121 14:43:08.329861 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.417984 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-service-ca\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418043 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418063 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4xzb\" (UniqueName: \"kubernetes.io/projected/b6915251-d587-47a3-9e13-1b2aeb98b1e9-kube-api-access-m4xzb\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418142 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-trusted-ca-bundle\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-oauth-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418208 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.418240 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-oauth-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.419307 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-oauth-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.420945 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-service-ca\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.421878 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.422458 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-oauth-config\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.423453 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6915251-d587-47a3-9e13-1b2aeb98b1e9-console-serving-cert\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.424202 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6915251-d587-47a3-9e13-1b2aeb98b1e9-trusted-ca-bundle\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.440923 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4xzb\" (UniqueName: \"kubernetes.io/projected/b6915251-d587-47a3-9e13-1b2aeb98b1e9-kube-api-access-m4xzb\") pod \"console-5f5d66fd8f-wgk4s\" (UID: \"b6915251-d587-47a3-9e13-1b2aeb98b1e9\") " pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.559865 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.655094 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z"] Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.790552 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5f5d66fd8f-wgk4s"] Jan 21 14:43:08 crc kubenswrapper[4834]: W0121 14:43:08.798169 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6915251_d587_47a3_9e13_1b2aeb98b1e9.slice/crio-35fd623dca7293104ac61e2f0df06c4a216b4826295370f381662ba74dd18fba WatchSource:0}: Error finding container 35fd623dca7293104ac61e2f0df06c4a216b4826295370f381662ba74dd18fba: Status 404 returned error can't find the container with id 35fd623dca7293104ac61e2f0df06c4a216b4826295370f381662ba74dd18fba Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.844249 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f5d66fd8f-wgk4s" event={"ID":"b6915251-d587-47a3-9e13-1b2aeb98b1e9","Type":"ContainerStarted","Data":"35fd623dca7293104ac61e2f0df06c4a216b4826295370f381662ba74dd18fba"} Jan 21 14:43:08 crc kubenswrapper[4834]: I0121 14:43:08.845131 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" event={"ID":"c74043b9-a7ba-40e4-9263-2e093fe9e7a6","Type":"ContainerStarted","Data":"4d9138b59770a5d5d8ca14c818b05ee1a27a63f98c55f47b59c3623094bb1013"} Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.168244 4834 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.168321 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.202461 4834 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.202559 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.215345 4834 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-nmstate/nmstate-handler-2hj92" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.215440 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:09 crc kubenswrapper[4834]: W0121 14:43:09.254368 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ea294a4_02f7_4dcc_9127_12ed01d12b40.slice/crio-dd96443d32c37692f3fa492058f25573ed6d1eb4a54c2af57cad6a673789bc07 WatchSource:0}: Error finding container dd96443d32c37692f3fa492058f25573ed6d1eb4a54c2af57cad6a673789bc07: Status 404 returned error can't find the container with id dd96443d32c37692f3fa492058f25573ed6d1eb4a54c2af57cad6a673789bc07 Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.417134 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-twtj6" Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.852485 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5f5d66fd8f-wgk4s" event={"ID":"b6915251-d587-47a3-9e13-1b2aeb98b1e9","Type":"ContainerStarted","Data":"69888ca05488ea63d81fb48559812250f8dbdff4a4e7c20baaf20d95a52c2884"} Jan 21 14:43:09 crc kubenswrapper[4834]: I0121 14:43:09.855290 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2hj92" event={"ID":"9ea294a4-02f7-4dcc-9127-12ed01d12b40","Type":"ContainerStarted","Data":"dd96443d32c37692f3fa492058f25573ed6d1eb4a54c2af57cad6a673789bc07"} Jan 21 14:43:10 crc kubenswrapper[4834]: I0121 14:43:10.047050 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5f5d66fd8f-wgk4s" podStartSLOduration=2.047033832 podStartE2EDuration="2.047033832s" podCreationTimestamp="2026-01-21 14:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:43:09.885217857 +0000 UTC m=+735.859566912" watchObservedRunningTime="2026-01-21 14:43:10.047033832 +0000 UTC m=+736.021382877" Jan 21 14:43:10 crc kubenswrapper[4834]: I0121 14:43:10.050910 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw"] Jan 21 14:43:10 crc kubenswrapper[4834]: W0121 14:43:10.061873 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc2d6e5b_0933_409b_8934_cec8c98f5f7a.slice/crio-393e073e3c8189fddd350c502c2b86a99f9cdcd123dac3e7c5d7f3103531f8fb WatchSource:0}: Error finding container 393e073e3c8189fddd350c502c2b86a99f9cdcd123dac3e7c5d7f3103531f8fb: Status 404 returned error can't find the container with id 393e073e3c8189fddd350c502c2b86a99f9cdcd123dac3e7c5d7f3103531f8fb Jan 21 14:43:10 crc kubenswrapper[4834]: I0121 14:43:10.142321 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b69n7"] Jan 21 14:43:10 crc kubenswrapper[4834]: I0121 14:43:10.862695 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" event={"ID":"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc","Type":"ContainerStarted","Data":"5621407ff839077a2de1cf922f5cdf7d73fbd4d8d3211d28b1b6e012a3ee7cfd"} Jan 21 14:43:10 crc kubenswrapper[4834]: I0121 14:43:10.863830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" event={"ID":"dc2d6e5b-0933-409b-8934-cec8c98f5f7a","Type":"ContainerStarted","Data":"393e073e3c8189fddd350c502c2b86a99f9cdcd123dac3e7c5d7f3103531f8fb"} Jan 21 
14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.114298 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.115127 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.910366 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2hj92" event={"ID":"9ea294a4-02f7-4dcc-9127-12ed01d12b40","Type":"ContainerStarted","Data":"4539db2e4cfbbc433f8f9f0a4ac90b0cc79efe8f97401eff635488dd0a5ee08d"} Jan 21 14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.911160 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.912536 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" event={"ID":"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc","Type":"ContainerStarted","Data":"aec2365459455f34903706fdc88817f1ffbdb64c79c1c0997a20580b44bf394a"} Jan 21 14:43:17 crc kubenswrapper[4834]: I0121 14:43:17.931397 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2hj92" podStartSLOduration=3.8491593870000003 podStartE2EDuration="10.931379661s" podCreationTimestamp="2026-01-21 14:43:07 +0000 UTC" firstStartedPulling="2026-01-21 14:43:09.260132859 +0000 UTC m=+735.234481904" lastFinishedPulling="2026-01-21 14:43:16.342353133 +0000 UTC m=+742.316702178" observedRunningTime="2026-01-21 14:43:17.928571465 +0000 UTC m=+743.902920510" watchObservedRunningTime="2026-01-21 14:43:17.931379661 +0000 UTC m=+743.905728706" Jan 21 14:43:18 crc kubenswrapper[4834]: I0121 14:43:18.560990 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:18 crc kubenswrapper[4834]: I0121 14:43:18.561060 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:18 crc kubenswrapper[4834]: I0121 14:43:18.568424 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:18 crc kubenswrapper[4834]: I0121 14:43:18.923728 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5f5d66fd8f-wgk4s" Jan 21 14:43:18 crc kubenswrapper[4834]: I0121 14:43:18.987215 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:43:21 crc kubenswrapper[4834]: I0121 14:43:21.946945 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" event={"ID":"dc2d6e5b-0933-409b-8934-cec8c98f5f7a","Type":"ContainerStarted","Data":"586ac49d2e6c33b72258d1b789ea4d4507a7490a7751164ef5c7adde2180824e"} Jan 21 14:43:22 crc kubenswrapper[4834]: I0121 14:43:22.952307 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:22 crc kubenswrapper[4834]: I0121 14:43:22.972320 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" podStartSLOduration=4.468312385 podStartE2EDuration="15.972305026s" podCreationTimestamp="2026-01-21 14:43:07 +0000 UTC" firstStartedPulling="2026-01-21 14:43:10.064720458 +0000 UTC m=+736.039069523" lastFinishedPulling="2026-01-21 14:43:21.568713119 +0000 UTC m=+747.543062164" observedRunningTime="2026-01-21 14:43:22.971637266 +0000 UTC m=+748.945986321" watchObservedRunningTime="2026-01-21 14:43:22.972305026 +0000 UTC m=+748.946654071" Jan 21 14:43:24 crc kubenswrapper[4834]: I0121 14:43:24.256711 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2hj92" Jan 21 14:43:24 crc kubenswrapper[4834]: I0121 14:43:24.964575 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" event={"ID":"c74043b9-a7ba-40e4-9263-2e093fe9e7a6","Type":"ContainerStarted","Data":"4610467e4e82717b699bb52f9e6ae7a6aafe5f1b3a9fed77ad36e739fce03be7"} Jan 21 14:43:24 crc kubenswrapper[4834]: I0121 14:43:24.989178 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-jnt5z" podStartSLOduration=2.011715397 podStartE2EDuration="17.989098901s" podCreationTimestamp="2026-01-21 14:43:07 +0000 UTC" firstStartedPulling="2026-01-21 14:43:08.665282316 +0000 UTC m=+734.639631361" lastFinishedPulling="2026-01-21 14:43:24.64266582 +0000 UTC m=+750.617014865" observedRunningTime="2026-01-21 14:43:24.983320947 +0000 UTC m=+750.957669992" watchObservedRunningTime="2026-01-21 14:43:24.989098901 +0000 UTC m=+750.963447956" Jan 21 14:43:25 crc kubenswrapper[4834]: I0121 14:43:25.973812 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" event={"ID":"f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc","Type":"ContainerStarted","Data":"1be3a417fa2b6fd65892f2575d9e04962faba32cf33ee54b8c850d4a0257027a"} Jan 21 14:43:25 crc kubenswrapper[4834]: I0121 14:43:25.993746 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b69n7" podStartSLOduration=4.293324811 podStartE2EDuration="18.993716985s" podCreationTimestamp="2026-01-21 14:43:07 +0000 UTC" firstStartedPulling="2026-01-21 14:43:10.151925331 +0000 UTC m=+736.126274376" lastFinishedPulling="2026-01-21 14:43:24.852317515 +0000 UTC m=+750.826666550" observedRunningTime="2026-01-21 14:43:25.991147667 +0000 UTC m=+751.965496732" watchObservedRunningTime="2026-01-21 14:43:25.993716985 +0000 UTC m=+751.968066050" Jan 21 14:43:38 crc kubenswrapper[4834]: I0121 14:43:38.802477 4834 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 14:43:39 crc kubenswrapper[4834]: I0121 14:43:39.209434 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-dkwnw" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.046763 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vzwpb" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" containerID="cri-o://2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc" 
gracePeriod=15 Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.437357 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vzwpb_92411afe-95fe-481a-ac22-4a411f4ff7f3/console/0.log" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.437831 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569401 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569546 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wml69\" (UniqueName: \"kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569576 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569612 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569633 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569700 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.569760 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert\") pod \"92411afe-95fe-481a-ac22-4a411f4ff7f3\" (UID: \"92411afe-95fe-481a-ac22-4a411f4ff7f3\") " Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.570719 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.571405 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.571411 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca" (OuterVolumeSpecName: "service-ca") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.571962 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config" (OuterVolumeSpecName: "console-config") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.577498 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69" (OuterVolumeSpecName: "kube-api-access-wml69") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "kube-api-access-wml69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.578597 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.585576 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "92411afe-95fe-481a-ac22-4a411f4ff7f3" (UID: "92411afe-95fe-481a-ac22-4a411f4ff7f3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671088 4834 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671132 4834 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671147 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wml69\" (UniqueName: \"kubernetes.io/projected/92411afe-95fe-481a-ac22-4a411f4ff7f3-kube-api-access-wml69\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671162 4834 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671201 4834 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671214 4834 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:44 crc kubenswrapper[4834]: I0121 14:43:44.671229 4834 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92411afe-95fe-481a-ac22-4a411f4ff7f3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106527 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vzwpb_92411afe-95fe-481a-ac22-4a411f4ff7f3/console/0.log" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106599 4834 generic.go:334] "Generic (PLEG): container finished" podID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerID="2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc" exitCode=2 Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106653 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzwpb" event={"ID":"92411afe-95fe-481a-ac22-4a411f4ff7f3","Type":"ContainerDied","Data":"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc"} Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106699 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzwpb" event={"ID":"92411afe-95fe-481a-ac22-4a411f4ff7f3","Type":"ContainerDied","Data":"e6859c11940879904507147f98a70bb90f082cdd43bad514a4091259c80f0ddd"} Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106732 4834 scope.go:117] "RemoveContainer" containerID="2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.106759 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vzwpb" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.133347 4834 scope.go:117] "RemoveContainer" containerID="2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc" Jan 21 14:43:45 crc kubenswrapper[4834]: E0121 14:43:45.134171 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc\": container with ID starting with 2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc not found: ID does not exist" containerID="2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.134284 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc"} err="failed to get container status \"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc\": rpc error: code = NotFound desc = could not find container \"2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc\": container with ID starting with 2a21a8239c77e6471b920295ef68bc1fbc4d9aa551f6d50ce1577c3df750e2bc not found: ID does not exist" Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.155394 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:43:45 crc kubenswrapper[4834]: I0121 14:43:45.161098 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vzwpb"] Jan 21 14:43:46 crc kubenswrapper[4834]: I0121 14:43:46.335565 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" path="/var/lib/kubelet/pods/92411afe-95fe-481a-ac22-4a411f4ff7f3/volumes" Jan 21 14:43:47 crc kubenswrapper[4834]: I0121 14:43:47.114288 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:43:47 crc kubenswrapper[4834]: I0121 14:43:47.115366 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:43:47 crc kubenswrapper[4834]: I0121 14:43:47.115431 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:43:47 crc kubenswrapper[4834]: I0121 14:43:47.117828 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:43:47 crc kubenswrapper[4834]: I0121 14:43:47.117917 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" 
containerName="machine-config-daemon" containerID="cri-o://ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5" gracePeriod=600 Jan 21 14:43:48 crc kubenswrapper[4834]: I0121 14:43:48.146458 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5" exitCode=0 Jan 21 14:43:48 crc kubenswrapper[4834]: I0121 14:43:48.146570 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5"} Jan 21 14:43:48 crc kubenswrapper[4834]: I0121 14:43:48.147443 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad"} Jan 21 14:43:48 crc kubenswrapper[4834]: I0121 14:43:48.147485 4834 scope.go:117] "RemoveContainer" containerID="51804d2d0a0e2a55ee63963a3000b962787b6703fde6673f242f539ad62b2efb" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.040936 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8"] Jan 21 14:43:53 crc kubenswrapper[4834]: E0121 14:43:53.042130 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.042150 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.042268 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="92411afe-95fe-481a-ac22-4a411f4ff7f3" containerName="console" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.043201 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.045551 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.051577 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8"] Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.120584 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.120678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptw4\" (UniqueName: \"kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.120802 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.222199 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.222277 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptw4\" (UniqueName: \"kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.222320 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.222851 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.222977 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.246621 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptw4\" (UniqueName: \"kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.363342 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:53 crc kubenswrapper[4834]: I0121 14:43:53.578511 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8"] Jan 21 14:43:54 crc kubenswrapper[4834]: I0121 14:43:54.192398 4834 generic.go:334] "Generic (PLEG): container finished" podID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerID="76eb781218eb56e6f413900d2d4e28dc809278213e7843c4e098f87c97615887" exitCode=0 Jan 21 14:43:54 crc kubenswrapper[4834]: I0121 14:43:54.192549 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" event={"ID":"f50d9192-f6ea-4d53-b1fc-ce8650f422ba","Type":"ContainerDied","Data":"76eb781218eb56e6f413900d2d4e28dc809278213e7843c4e098f87c97615887"} Jan 21 14:43:54 crc kubenswrapper[4834]: I0121 14:43:54.194765 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" event={"ID":"f50d9192-f6ea-4d53-b1fc-ce8650f422ba","Type":"ContainerStarted","Data":"e0ee4707b7b47f1b6bc60ef3314330f287e11bad48aa17ea9472dd3ce35760d9"} Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.365235 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.367637 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.380663 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.470076 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.470210 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.470259 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrwx\" (UniqueName: \"kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.571791 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrwx\" (UniqueName: \"kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.572242 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.572296 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.572849 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.572887 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.599687 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zqrwx\" (UniqueName: \"kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx\") pod \"redhat-operators-75j4q\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.702482 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:43:55 crc kubenswrapper[4834]: I0121 14:43:55.937780 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:43:55 crc kubenswrapper[4834]: W0121 14:43:55.950181 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd183af63_fac0_4cb9_b959_cbbbf58840af.slice/crio-b579e3067bfedcdca96cefbc997b7dc129c73778a2cee0ca811547eaf2c50b4f WatchSource:0}: Error finding container b579e3067bfedcdca96cefbc997b7dc129c73778a2cee0ca811547eaf2c50b4f: Status 404 returned error can't find the container with id b579e3067bfedcdca96cefbc997b7dc129c73778a2cee0ca811547eaf2c50b4f Jan 21 14:43:56 crc kubenswrapper[4834]: I0121 14:43:56.209670 4834 generic.go:334] "Generic (PLEG): container finished" podID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerID="1108725365a984146ce688e6471c24bb9c9ce3af6a1e580ca73a206f567362c5" exitCode=0 Jan 21 14:43:56 crc kubenswrapper[4834]: I0121 14:43:56.209809 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" event={"ID":"f50d9192-f6ea-4d53-b1fc-ce8650f422ba","Type":"ContainerDied","Data":"1108725365a984146ce688e6471c24bb9c9ce3af6a1e580ca73a206f567362c5"} Jan 21 14:43:56 crc kubenswrapper[4834]: I0121 14:43:56.211906 4834 generic.go:334] "Generic (PLEG): container finished" podID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerID="c458a479fcfaeabd98997576502f6c322ba38eddf11ddc4acef0e6b42b3c9ab4" exitCode=0 Jan 21 14:43:56 crc kubenswrapper[4834]: I0121 14:43:56.211951 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerDied","Data":"c458a479fcfaeabd98997576502f6c322ba38eddf11ddc4acef0e6b42b3c9ab4"} Jan 21 14:43:56 crc kubenswrapper[4834]: I0121 14:43:56.211971 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerStarted","Data":"b579e3067bfedcdca96cefbc997b7dc129c73778a2cee0ca811547eaf2c50b4f"} Jan 21 14:43:57 crc kubenswrapper[4834]: I0121 14:43:57.221759 4834 generic.go:334] "Generic (PLEG): container finished" podID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerID="155fc54bbeb671bf99058e52de956f0a773fd12397cf750cb6729e86284186d7" exitCode=0 Jan 21 14:43:57 crc kubenswrapper[4834]: I0121 14:43:57.222034 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" event={"ID":"f50d9192-f6ea-4d53-b1fc-ce8650f422ba","Type":"ContainerDied","Data":"155fc54bbeb671bf99058e52de956f0a773fd12397cf750cb6729e86284186d7"} Jan 21 14:43:57 crc kubenswrapper[4834]: I0121 14:43:57.224704 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" 
event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerStarted","Data":"1f0d4ce80e5f1c229df5fc1cdc7e5112171b0d148acdbbd18f81da420b6c6200"} Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.736728 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.822042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle\") pod \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.822107 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rptw4\" (UniqueName: \"kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4\") pod \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.822178 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util\") pod \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\" (UID: \"f50d9192-f6ea-4d53-b1fc-ce8650f422ba\") " Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.823728 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle" (OuterVolumeSpecName: "bundle") pod "f50d9192-f6ea-4d53-b1fc-ce8650f422ba" (UID: "f50d9192-f6ea-4d53-b1fc-ce8650f422ba"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.831652 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4" (OuterVolumeSpecName: "kube-api-access-rptw4") pod "f50d9192-f6ea-4d53-b1fc-ce8650f422ba" (UID: "f50d9192-f6ea-4d53-b1fc-ce8650f422ba"). InnerVolumeSpecName "kube-api-access-rptw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.842276 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util" (OuterVolumeSpecName: "util") pod "f50d9192-f6ea-4d53-b1fc-ce8650f422ba" (UID: "f50d9192-f6ea-4d53-b1fc-ce8650f422ba"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.924200 4834 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.924266 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rptw4\" (UniqueName: \"kubernetes.io/projected/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-kube-api-access-rptw4\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:58 crc kubenswrapper[4834]: I0121 14:43:58.924286 4834 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f50d9192-f6ea-4d53-b1fc-ce8650f422ba-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:43:59 crc kubenswrapper[4834]: I0121 14:43:59.242457 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" event={"ID":"f50d9192-f6ea-4d53-b1fc-ce8650f422ba","Type":"ContainerDied","Data":"e0ee4707b7b47f1b6bc60ef3314330f287e11bad48aa17ea9472dd3ce35760d9"} Jan 21 14:43:59 crc kubenswrapper[4834]: I0121 14:43:59.242518 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ee4707b7b47f1b6bc60ef3314330f287e11bad48aa17ea9472dd3ce35760d9" Jan 21 14:43:59 crc kubenswrapper[4834]: I0121 14:43:59.242526 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8" Jan 21 14:44:00 crc kubenswrapper[4834]: I0121 14:44:00.253200 4834 generic.go:334] "Generic (PLEG): container finished" podID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerID="1f0d4ce80e5f1c229df5fc1cdc7e5112171b0d148acdbbd18f81da420b6c6200" exitCode=0 Jan 21 14:44:00 crc kubenswrapper[4834]: I0121 14:44:00.253267 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerDied","Data":"1f0d4ce80e5f1c229df5fc1cdc7e5112171b0d148acdbbd18f81da420b6c6200"} Jan 21 14:44:01 crc kubenswrapper[4834]: I0121 14:44:01.265263 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerStarted","Data":"217b0b455fdbe7090bd1c96cc43b801fc3695a9cb1f975e6833742f10dd57513"} Jan 21 14:44:01 crc kubenswrapper[4834]: I0121 14:44:01.286027 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75j4q" podStartSLOduration=1.8233547780000001 podStartE2EDuration="6.286001992s" podCreationTimestamp="2026-01-21 14:43:55 +0000 UTC" firstStartedPulling="2026-01-21 14:43:56.213587206 +0000 UTC m=+782.187936251" lastFinishedPulling="2026-01-21 14:44:00.67623442 +0000 UTC m=+786.650583465" observedRunningTime="2026-01-21 14:44:01.282347333 +0000 UTC m=+787.256696388" watchObservedRunningTime="2026-01-21 14:44:01.286001992 +0000 UTC m=+787.260351037" Jan 21 14:44:05 crc kubenswrapper[4834]: I0121 14:44:05.702733 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:44:05 crc kubenswrapper[4834]: I0121 14:44:05.703269 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75j4q" 
Jan 21 14:44:06 crc kubenswrapper[4834]: I0121 14:44:06.805973 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-75j4q" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="registry-server" probeResult="failure" output=< Jan 21 14:44:06 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 14:44:06 crc kubenswrapper[4834]: > Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.334402 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x"] Jan 21 14:44:08 crc kubenswrapper[4834]: E0121 14:44:08.336262 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="extract" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.336388 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="extract" Jan 21 14:44:08 crc kubenswrapper[4834]: E0121 14:44:08.336462 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="pull" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.336529 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="pull" Jan 21 14:44:08 crc kubenswrapper[4834]: E0121 14:44:08.336610 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="util" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.336676 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="util" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.336879 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50d9192-f6ea-4d53-b1fc-ce8650f422ba" containerName="extract" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.337529 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.339502 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.341214 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.341289 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.341450 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.343783 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7c546" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.354547 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x"] Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.470808 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-webhook-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.470915 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-apiservice-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.470957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2lq\" (UniqueName: \"kubernetes.io/projected/2e030c1c-2b95-4ea1-a9be-91a707b92e15-kube-api-access-mc2lq\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.571808 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-webhook-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.572190 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-apiservice-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.572274 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mc2lq\" (UniqueName: \"kubernetes.io/projected/2e030c1c-2b95-4ea1-a9be-91a707b92e15-kube-api-access-mc2lq\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.579083 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-apiservice-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.579649 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e030c1c-2b95-4ea1-a9be-91a707b92e15-webhook-cert\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.609727 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc2lq\" (UniqueName: \"kubernetes.io/projected/2e030c1c-2b95-4ea1-a9be-91a707b92e15-kube-api-access-mc2lq\") pod \"metallb-operator-controller-manager-9f5f6b6d-8k84x\" (UID: \"2e030c1c-2b95-4ea1-a9be-91a707b92e15\") " pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.633192 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-794b596549-wm7g7"] Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.634100 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.638886 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.643550 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fx6hm" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.643550 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.658612 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.674342 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-794b596549-wm7g7"] Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.775082 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-webhook-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.775153 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-apiservice-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.775238 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc9t\" (UniqueName: \"kubernetes.io/projected/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-kube-api-access-9mc9t\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.876861 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc9t\" (UniqueName: \"kubernetes.io/projected/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-kube-api-access-9mc9t\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.876955 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-webhook-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.876988 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-apiservice-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.881858 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-webhook-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.893566 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-apiservice-cert\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.901135 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc9t\" (UniqueName: \"kubernetes.io/projected/09ca80fd-b29f-4eca-8ad2-28f29cb91e78-kube-api-access-9mc9t\") pod \"metallb-operator-webhook-server-794b596549-wm7g7\" (UID: \"09ca80fd-b29f-4eca-8ad2-28f29cb91e78\") " pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:08 crc kubenswrapper[4834]: I0121 14:44:08.954249 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:09 crc kubenswrapper[4834]: I0121 14:44:09.247199 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x"] Jan 21 14:44:09 crc kubenswrapper[4834]: W0121 14:44:09.271656 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e030c1c_2b95_4ea1_a9be_91a707b92e15.slice/crio-f8e007b1c6c00b6f41449bcd7496a3fb88cb8f7ef719c6d2b25283a06190b400 WatchSource:0}: Error finding container f8e007b1c6c00b6f41449bcd7496a3fb88cb8f7ef719c6d2b25283a06190b400: Status 404 returned error can't find the container with id f8e007b1c6c00b6f41449bcd7496a3fb88cb8f7ef719c6d2b25283a06190b400 Jan 21 14:44:09 crc kubenswrapper[4834]: I0121 14:44:09.318952 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" event={"ID":"2e030c1c-2b95-4ea1-a9be-91a707b92e15","Type":"ContainerStarted","Data":"f8e007b1c6c00b6f41449bcd7496a3fb88cb8f7ef719c6d2b25283a06190b400"} Jan 21 14:44:09 crc kubenswrapper[4834]: I0121 14:44:09.375387 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-794b596549-wm7g7"] Jan 21 14:44:09 crc kubenswrapper[4834]: W0121 14:44:09.382419 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09ca80fd_b29f_4eca_8ad2_28f29cb91e78.slice/crio-a998135c6883f4605d87f7cf9a3a146a382bfdf20d856f078abaea3191c6f2de WatchSource:0}: Error finding container a998135c6883f4605d87f7cf9a3a146a382bfdf20d856f078abaea3191c6f2de: Status 404 returned error can't find the container with id a998135c6883f4605d87f7cf9a3a146a382bfdf20d856f078abaea3191c6f2de Jan 21 14:44:10 crc kubenswrapper[4834]: I0121 14:44:10.332165 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" event={"ID":"09ca80fd-b29f-4eca-8ad2-28f29cb91e78","Type":"ContainerStarted","Data":"a998135c6883f4605d87f7cf9a3a146a382bfdf20d856f078abaea3191c6f2de"} Jan 21 14:44:15 crc kubenswrapper[4834]: I0121 14:44:15.823055 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:44:15 crc kubenswrapper[4834]: I0121 14:44:15.923670 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:44:16 crc kubenswrapper[4834]: I0121 14:44:16.331739 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:44:17 crc kubenswrapper[4834]: I0121 14:44:17.421919 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-75j4q" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="registry-server" containerID="cri-o://217b0b455fdbe7090bd1c96cc43b801fc3695a9cb1f975e6833742f10dd57513" gracePeriod=2 Jan 21 14:44:18 crc kubenswrapper[4834]: I0121 14:44:18.431423 4834 generic.go:334] "Generic (PLEG): container finished" podID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerID="217b0b455fdbe7090bd1c96cc43b801fc3695a9cb1f975e6833742f10dd57513" exitCode=0 Jan 21 14:44:18 crc kubenswrapper[4834]: I0121 14:44:18.431940 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerDied","Data":"217b0b455fdbe7090bd1c96cc43b801fc3695a9cb1f975e6833742f10dd57513"} Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.277006 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.383028 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities\") pod \"d183af63-fac0-4cb9-b959-cbbbf58840af\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.383077 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrwx\" (UniqueName: \"kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx\") pod \"d183af63-fac0-4cb9-b959-cbbbf58840af\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.383133 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content\") pod \"d183af63-fac0-4cb9-b959-cbbbf58840af\" (UID: \"d183af63-fac0-4cb9-b959-cbbbf58840af\") " Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.384727 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities" (OuterVolumeSpecName: "utilities") pod "d183af63-fac0-4cb9-b959-cbbbf58840af" (UID: "d183af63-fac0-4cb9-b959-cbbbf58840af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.389352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx" (OuterVolumeSpecName: "kube-api-access-zqrwx") pod "d183af63-fac0-4cb9-b959-cbbbf58840af" (UID: "d183af63-fac0-4cb9-b959-cbbbf58840af"). InnerVolumeSpecName "kube-api-access-zqrwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.456948 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" event={"ID":"09ca80fd-b29f-4eca-8ad2-28f29cb91e78","Type":"ContainerStarted","Data":"62e6d469b430fc51a0422c6ab119d270c5410e9fa4b5e31d3b73888c6ef39925"} Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.458279 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.461462 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75j4q" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.463448 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75j4q" event={"ID":"d183af63-fac0-4cb9-b959-cbbbf58840af","Type":"ContainerDied","Data":"b579e3067bfedcdca96cefbc997b7dc129c73778a2cee0ca811547eaf2c50b4f"} Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.463519 4834 scope.go:117] "RemoveContainer" containerID="217b0b455fdbe7090bd1c96cc43b801fc3695a9cb1f975e6833742f10dd57513" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.464757 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" event={"ID":"2e030c1c-2b95-4ea1-a9be-91a707b92e15","Type":"ContainerStarted","Data":"385ade31d45abab8011552e1c5e6d35e0940f289411b69da91b978dd03b00f2d"} Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.464957 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.481171 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" podStartSLOduration=1.942888415 podStartE2EDuration="14.481151495s" podCreationTimestamp="2026-01-21 14:44:08 +0000 UTC" firstStartedPulling="2026-01-21 14:44:09.385811664 +0000 UTC m=+795.360160699" lastFinishedPulling="2026-01-21 14:44:21.924074734 +0000 UTC m=+807.898423779" observedRunningTime="2026-01-21 14:44:22.476566228 +0000 UTC m=+808.450915293" watchObservedRunningTime="2026-01-21 14:44:22.481151495 +0000 UTC m=+808.455500540" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.490779 4834 scope.go:117] "RemoveContainer" containerID="1f0d4ce80e5f1c229df5fc1cdc7e5112171b0d148acdbbd18f81da420b6c6200" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.493014 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.493047 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrwx\" (UniqueName: \"kubernetes.io/projected/d183af63-fac0-4cb9-b959-cbbbf58840af-kube-api-access-zqrwx\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.507807 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" podStartSLOduration=1.88675074 podStartE2EDuration="14.507779055s" podCreationTimestamp="2026-01-21 14:44:08 +0000 UTC" 
firstStartedPulling="2026-01-21 14:44:09.27839814 +0000 UTC m=+795.252747175" lastFinishedPulling="2026-01-21 14:44:21.899426445 +0000 UTC m=+807.873775490" observedRunningTime="2026-01-21 14:44:22.506592309 +0000 UTC m=+808.480941354" watchObservedRunningTime="2026-01-21 14:44:22.507779055 +0000 UTC m=+808.482128100" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.515911 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d183af63-fac0-4cb9-b959-cbbbf58840af" (UID: "d183af63-fac0-4cb9-b959-cbbbf58840af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.534792 4834 scope.go:117] "RemoveContainer" containerID="c458a479fcfaeabd98997576502f6c322ba38eddf11ddc4acef0e6b42b3c9ab4" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.594685 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d183af63-fac0-4cb9-b959-cbbbf58840af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.806816 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:44:22 crc kubenswrapper[4834]: I0121 14:44:22.811610 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-75j4q"] Jan 21 14:44:24 crc kubenswrapper[4834]: I0121 14:44:24.331414 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" path="/var/lib/kubelet/pods/d183af63-fac0-4cb9-b959-cbbbf58840af/volumes" Jan 21 14:44:38 crc kubenswrapper[4834]: I0121 14:44:38.961816 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-794b596549-wm7g7" Jan 21 14:44:58 crc kubenswrapper[4834]: I0121 14:44:58.663896 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9f5f6b6d-8k84x" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.489438 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp"] Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.489882 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="extract-utilities" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.489911 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="extract-utilities" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.489964 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="registry-server" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.489975 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="registry-server" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.489988 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="extract-content" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.489996 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="extract-content" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.490201 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d183af63-fac0-4cb9-b959-cbbbf58840af" containerName="registry-server" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.490890 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.494301 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nmfb6"] Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.497588 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.503673 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.504789 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fxnlm" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.507559 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.508567 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.518779 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp"] Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.529755 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-conf\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.529886 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pgp\" (UniqueName: \"kubernetes.io/projected/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-kube-api-access-c4pgp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.529955 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-reloader\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.529981 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.530008 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/872f6769-1a60-42d1-911d-0db9cfba03ce-kube-api-access-7vkg6\") pod 
\"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.530053 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-sockets\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.530206 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.530300 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-startup\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.530470 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.569744 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jqkhx"] Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.571231 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.579380 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.579483 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-snssz" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.579723 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.579853 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.589310 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-4qs4k"] Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.590396 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.600292 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.618368 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4qs4k"] Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632219 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-conf\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632637 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pgp\" (UniqueName: \"kubernetes.io/projected/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-kube-api-access-c4pgp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632751 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-reloader\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632796 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-conf\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632816 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-sockets\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632846 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632864 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/872f6769-1a60-42d1-911d-0db9cfba03ce-kube-api-access-7vkg6\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632901 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.632996 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-startup\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.633018 4834 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.633058 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-reloader\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.633142 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.633347 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-sockets\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.633482 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.633609 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs podName:872f6769-1a60-42d1-911d-0db9cfba03ce nodeName:}" failed. No retries permitted until 2026-01-21 14:45:00.133585977 +0000 UTC m=+846.107935212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs") pod "frr-k8s-nmfb6" (UID: "872f6769-1a60-42d1-911d-0db9cfba03ce") : secret "frr-k8s-certs-secret" not found Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.634153 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/872f6769-1a60-42d1-911d-0db9cfba03ce-frr-startup\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.645202 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.658537 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/872f6769-1a60-42d1-911d-0db9cfba03ce-kube-api-access-7vkg6\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.660729 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pgp\" (UniqueName: \"kubernetes.io/projected/2dc8a2ee-3729-4765-86aa-4f9b89a00c79-kube-api-access-c4pgp\") pod \"frr-k8s-webhook-server-7df86c4f6c-8g2fp\" (UID: \"2dc8a2ee-3729-4765-86aa-4f9b89a00c79\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734334 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734403 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-metrics-certs\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734563 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-cert\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-metrics-certs\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734643 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/fa1a9082-c606-4634-9715-1b81c9f0137f-metallb-excludel2\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734758 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/fa1a9082-c606-4634-9715-1b81c9f0137f-kube-api-access-tkqwd\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.734850 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmngl\" (UniqueName: \"kubernetes.io/projected/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-kube-api-access-nmngl\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.815435 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836226 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-cert\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836282 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fa1a9082-c606-4634-9715-1b81c9f0137f-metallb-excludel2\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836312 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-metrics-certs\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836356 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/fa1a9082-c606-4634-9715-1b81c9f0137f-kube-api-access-tkqwd\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836410 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmngl\" (UniqueName: \"kubernetes.io/projected/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-kube-api-access-nmngl\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836438 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.836461 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-metrics-certs\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.837123 4834 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.837195 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fa1a9082-c606-4634-9715-1b81c9f0137f-metallb-excludel2\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: E0121 14:44:59.837230 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist podName:fa1a9082-c606-4634-9715-1b81c9f0137f nodeName:}" failed. No retries permitted until 2026-01-21 14:45:00.33719062 +0000 UTC m=+846.311539665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist") pod "speaker-jqkhx" (UID: "fa1a9082-c606-4634-9715-1b81c9f0137f") : secret "metallb-memberlist" not found Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.840356 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-metrics-certs\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.841040 4834 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.842092 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-metrics-certs\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.851557 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-cert\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.858827 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmngl\" (UniqueName: \"kubernetes.io/projected/fdcc3bcf-ed3c-42b3-aaed-02aedc639655-kube-api-access-nmngl\") pod \"controller-6968d8fdc4-4qs4k\" (UID: \"fdcc3bcf-ed3c-42b3-aaed-02aedc639655\") " pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.859416 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqwd\" (UniqueName: \"kubernetes.io/projected/fa1a9082-c606-4634-9715-1b81c9f0137f-kube-api-access-tkqwd\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:44:59 crc kubenswrapper[4834]: I0121 14:44:59.910696 4834 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.053657 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp"] Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.136453 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4qs4k"] Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.140799 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:00 crc kubenswrapper[4834]: W0121 14:45:00.141275 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdcc3bcf_ed3c_42b3_aaed_02aedc639655.slice/crio-cd4505b027d10a70876243dcdfabf294f66639e8e381ed553538abf649a2c119 WatchSource:0}: Error finding container cd4505b027d10a70876243dcdfabf294f66639e8e381ed553538abf649a2c119: Status 404 returned error can't find the container with id cd4505b027d10a70876243dcdfabf294f66639e8e381ed553538abf649a2c119 Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.149355 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/872f6769-1a60-42d1-911d-0db9cfba03ce-metrics-certs\") pod \"frr-k8s-nmfb6\" (UID: \"872f6769-1a60-42d1-911d-0db9cfba03ce\") " pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.155836 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx"] Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.156905 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.160113 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.160389 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.177769 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx"] Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.345004 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfj9\" (UniqueName: \"kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.345629 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.345874 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.346016 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:45:00 crc kubenswrapper[4834]: E0121 14:45:00.346200 4834 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 14:45:00 crc kubenswrapper[4834]: E0121 14:45:00.346293 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist podName:fa1a9082-c606-4634-9715-1b81c9f0137f nodeName:}" failed. No retries permitted until 2026-01-21 14:45:01.346269255 +0000 UTC m=+847.320618460 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist") pod "speaker-jqkhx" (UID: "fa1a9082-c606-4634-9715-1b81c9f0137f") : secret "metallb-memberlist" not found Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.425141 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.447595 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptfj9\" (UniqueName: \"kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.447670 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.447717 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.449336 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.455481 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.470140 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptfj9\" (UniqueName: \"kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9\") pod \"collect-profiles-29483445-t8llx\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.485108 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.707310 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx"] Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.720019 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" event={"ID":"2dc8a2ee-3729-4765-86aa-4f9b89a00c79","Type":"ContainerStarted","Data":"e3a3de701d0a140762a93ed383ff1059c247714354dbad6bdc95b1b2f6de7d7f"} Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.721316 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4qs4k" event={"ID":"fdcc3bcf-ed3c-42b3-aaed-02aedc639655","Type":"ContainerStarted","Data":"cd4505b027d10a70876243dcdfabf294f66639e8e381ed553538abf649a2c119"} Jan 21 14:45:00 crc kubenswrapper[4834]: I0121 14:45:00.722254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" event={"ID":"cdca5cec-a6bb-41bb-9270-2f6885e774db","Type":"ContainerStarted","Data":"efd780566cf24d78633830f00cdebb7b7f15d427103415c9f6e95ad1f1dea291"} Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.360999 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.367946 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fa1a9082-c606-4634-9715-1b81c9f0137f-memberlist\") pod \"speaker-jqkhx\" (UID: \"fa1a9082-c606-4634-9715-1b81c9f0137f\") " pod="metallb-system/speaker-jqkhx" Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.392210 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jqkhx" Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.730593 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4qs4k" event={"ID":"fdcc3bcf-ed3c-42b3-aaed-02aedc639655","Type":"ContainerStarted","Data":"5c973d33be57b5cee5681a30761bba21281dd1ce5dee305232915070537ed9da"} Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.732683 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" event={"ID":"cdca5cec-a6bb-41bb-9270-2f6885e774db","Type":"ContainerStarted","Data":"475734973abf007eb3f20554d5c66cba5477774afb370779cfe7ea0a432aab21"} Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.734319 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jqkhx" event={"ID":"fa1a9082-c606-4634-9715-1b81c9f0137f","Type":"ContainerStarted","Data":"b8f8f3479734b002dd9f37e3ace05fc5fb034718a8df0ce084d0969ef2ee52e2"} Jan 21 14:45:01 crc kubenswrapper[4834]: I0121 14:45:01.735693 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"a9d0dc2724464a74296cad1532bafc83d9ec19719eb8b36f3c5efbca8b9b097f"} Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.765178 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jqkhx" event={"ID":"fa1a9082-c606-4634-9715-1b81c9f0137f","Type":"ContainerStarted","Data":"45a3dd766a9fb2ca4d3a16ac7784aef6dc6f0ed8275d9436dfea2596aaeeec8c"} Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.765786 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jqkhx" Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.765806 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jqkhx" event={"ID":"fa1a9082-c606-4634-9715-1b81c9f0137f","Type":"ContainerStarted","Data":"a0e72e3bbd69d97b260e3ba52e9f4208ca3a58248dabd506966e7c378824e0c2"} Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.768107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4qs4k" event={"ID":"fdcc3bcf-ed3c-42b3-aaed-02aedc639655","Type":"ContainerStarted","Data":"22018ff32b7cc8d5089323b2c50db45343a01916a02272b33f3d544e7a344440"} Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.768422 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.770914 4834 generic.go:334] "Generic (PLEG): container finished" podID="cdca5cec-a6bb-41bb-9270-2f6885e774db" containerID="475734973abf007eb3f20554d5c66cba5477774afb370779cfe7ea0a432aab21" exitCode=0 Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.770991 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" event={"ID":"cdca5cec-a6bb-41bb-9270-2f6885e774db","Type":"ContainerDied","Data":"475734973abf007eb3f20554d5c66cba5477774afb370779cfe7ea0a432aab21"} Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.798611 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jqkhx" podStartSLOduration=3.7985863010000003 podStartE2EDuration="3.798586301s" podCreationTimestamp="2026-01-21 14:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:45:02.786832851 +0000 UTC m=+848.761181896" watchObservedRunningTime="2026-01-21 14:45:02.798586301 +0000 UTC m=+848.772935346" Jan 21 14:45:02 crc kubenswrapper[4834]: I0121 14:45:02.853908 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-4qs4k" podStartSLOduration=3.853875324 podStartE2EDuration="3.853875324s" podCreationTimestamp="2026-01-21 14:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:45:02.846795217 +0000 UTC m=+848.821144262" watchObservedRunningTime="2026-01-21 14:45:02.853875324 +0000 UTC m=+848.828224539" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.050510 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.201746 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume\") pod \"cdca5cec-a6bb-41bb-9270-2f6885e774db\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.201942 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume\") pod \"cdca5cec-a6bb-41bb-9270-2f6885e774db\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.201998 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptfj9\" (UniqueName: \"kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9\") pod \"cdca5cec-a6bb-41bb-9270-2f6885e774db\" (UID: \"cdca5cec-a6bb-41bb-9270-2f6885e774db\") " Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.203792 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdca5cec-a6bb-41bb-9270-2f6885e774db" (UID: "cdca5cec-a6bb-41bb-9270-2f6885e774db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.207465 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdca5cec-a6bb-41bb-9270-2f6885e774db" (UID: "cdca5cec-a6bb-41bb-9270-2f6885e774db"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.207816 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9" (OuterVolumeSpecName: "kube-api-access-ptfj9") pod "cdca5cec-a6bb-41bb-9270-2f6885e774db" (UID: "cdca5cec-a6bb-41bb-9270-2f6885e774db"). InnerVolumeSpecName "kube-api-access-ptfj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.303741 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdca5cec-a6bb-41bb-9270-2f6885e774db-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.303784 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptfj9\" (UniqueName: \"kubernetes.io/projected/cdca5cec-a6bb-41bb-9270-2f6885e774db-kube-api-access-ptfj9\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.303798 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdca5cec-a6bb-41bb-9270-2f6885e774db-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.802390 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" event={"ID":"cdca5cec-a6bb-41bb-9270-2f6885e774db","Type":"ContainerDied","Data":"efd780566cf24d78633830f00cdebb7b7f15d427103415c9f6e95ad1f1dea291"} Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.802460 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd780566cf24d78633830f00cdebb7b7f15d427103415c9f6e95ad1f1dea291" Jan 21 14:45:04 crc kubenswrapper[4834]: I0121 14:45:04.802491 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx" Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.399869 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jqkhx" Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.864746 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" event={"ID":"2dc8a2ee-3729-4765-86aa-4f9b89a00c79","Type":"ContainerStarted","Data":"2bc263d264da83ec6aa5527d0363366f476d8b545caa011505f8ec1b11eb5df4"} Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.865411 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.867179 4834 generic.go:334] "Generic (PLEG): container finished" podID="872f6769-1a60-42d1-911d-0db9cfba03ce" containerID="ebd1cba51fcfb740c46aa22b02be0ac321d6f8b7eceee6c84799a7a8cf079a1c" exitCode=0 Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.867261 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerDied","Data":"ebd1cba51fcfb740c46aa22b02be0ac321d6f8b7eceee6c84799a7a8cf079a1c"} Jan 21 14:45:11 crc kubenswrapper[4834]: I0121 14:45:11.901142 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" podStartSLOduration=1.5367083049999999 podStartE2EDuration="12.90110512s" podCreationTimestamp="2026-01-21 14:44:59 +0000 UTC" firstStartedPulling="2026-01-21 14:45:00.06879698 +0000 UTC m=+846.043146015" lastFinishedPulling="2026-01-21 14:45:11.433193785 +0000 UTC m=+857.407542830" observedRunningTime="2026-01-21 14:45:11.894666632 +0000 UTC m=+857.869015687" watchObservedRunningTime="2026-01-21 14:45:11.90110512 +0000 UTC m=+857.875454165" Jan 21 
14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.718822 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb"] Jan 21 14:45:12 crc kubenswrapper[4834]: E0121 14:45:12.719528 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdca5cec-a6bb-41bb-9270-2f6885e774db" containerName="collect-profiles" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.719546 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdca5cec-a6bb-41bb-9270-2f6885e774db" containerName="collect-profiles" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.719666 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdca5cec-a6bb-41bb-9270-2f6885e774db" containerName="collect-profiles" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.720540 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.723883 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.736844 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb"] Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.750338 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.750395 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.750442 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28jz\" (UniqueName: \"kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.851175 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.851550 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.851679 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28jz\" (UniqueName: \"kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.852022 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.852130 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.876022 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28jz\" (UniqueName: \"kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.878276 4834 generic.go:334] "Generic (PLEG): container finished" podID="872f6769-1a60-42d1-911d-0db9cfba03ce" containerID="5c1734f4f8827daf2ee6a549774eed31c30b3bb152a34c095e5ea12410a97580" exitCode=0 Jan 21 14:45:12 crc kubenswrapper[4834]: I0121 14:45:12.878397 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerDied","Data":"5c1734f4f8827daf2ee6a549774eed31c30b3bb152a34c095e5ea12410a97580"} Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.043648 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.606767 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb"] Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.894186 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerStarted","Data":"08c534f9da0c51aff4af45a0a7e16fac159498506de47c0004eb8326a5f84888"} Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.894255 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerStarted","Data":"23f4f7d0080562d729ab75b54b1de5a102c2d2cba607c40ad300e6f23c4657d5"} Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.897611 4834 generic.go:334] "Generic (PLEG): container finished" podID="872f6769-1a60-42d1-911d-0db9cfba03ce" containerID="94b7fae5ec11f5d1a386630c6247e00e06603367321e34f00b6d1cb2ac894d57" exitCode=0 Jan 21 14:45:13 crc kubenswrapper[4834]: I0121 14:45:13.897674 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerDied","Data":"94b7fae5ec11f5d1a386630c6247e00e06603367321e34f00b6d1cb2ac894d57"} Jan 21 14:45:14 crc kubenswrapper[4834]: I0121 14:45:14.904893 4834 generic.go:334] "Generic (PLEG): container finished" podID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerID="08c534f9da0c51aff4af45a0a7e16fac159498506de47c0004eb8326a5f84888" exitCode=0 Jan 21 14:45:14 crc kubenswrapper[4834]: I0121 14:45:14.904979 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerDied","Data":"08c534f9da0c51aff4af45a0a7e16fac159498506de47c0004eb8326a5f84888"} Jan 21 14:45:14 crc kubenswrapper[4834]: I0121 14:45:14.909317 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"db31b2e7a7720d70bc764d7cfebfca213e2edea5453df3d99f58ff013d490e51"} Jan 21 14:45:14 crc kubenswrapper[4834]: I0121 14:45:14.909598 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"aa2603030231e37b8ff36b0078ee3d3a32fbd61d6bbe3ee82063e6f0cfbe75fe"} Jan 21 14:45:15 crc kubenswrapper[4834]: I0121 14:45:15.922370 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"114b3e2024ec4f19f8f908e6d8c93896dce54e3e1f6b6e5f6690c3eeb55cbc6d"} Jan 21 14:45:15 crc kubenswrapper[4834]: I0121 14:45:15.922733 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"b9eacd0d4aeafd933186b58504c48e570e064a3b50441b654143c7cdca387b0f"} Jan 21 14:45:16 crc kubenswrapper[4834]: I0121 14:45:16.935073 4834 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"4dc8154731d50d315730ab1e5a309a88d14a8c6be7ef7eb082ab5d321863c7fd"} Jan 21 14:45:16 crc kubenswrapper[4834]: I0121 14:45:16.935135 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nmfb6" event={"ID":"872f6769-1a60-42d1-911d-0db9cfba03ce","Type":"ContainerStarted","Data":"1c0ccc3df819d8266a25121097a732ff3d04e7e0a6766022db7c1317a53d9fa7"} Jan 21 14:45:17 crc kubenswrapper[4834]: I0121 14:45:17.942614 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:17 crc kubenswrapper[4834]: I0121 14:45:17.979623 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nmfb6" podStartSLOduration=8.617901582 podStartE2EDuration="18.97959142s" podCreationTimestamp="2026-01-21 14:44:59 +0000 UTC" firstStartedPulling="2026-01-21 14:45:01.091413757 +0000 UTC m=+847.065762812" lastFinishedPulling="2026-01-21 14:45:11.453103605 +0000 UTC m=+857.427452650" observedRunningTime="2026-01-21 14:45:17.974193434 +0000 UTC m=+863.948542489" watchObservedRunningTime="2026-01-21 14:45:17.97959142 +0000 UTC m=+863.953940495" Jan 21 14:45:19 crc kubenswrapper[4834]: I0121 14:45:19.915090 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-4qs4k" Jan 21 14:45:20 crc kubenswrapper[4834]: I0121 14:45:20.426467 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:20 crc kubenswrapper[4834]: I0121 14:45:20.491485 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:21 crc kubenswrapper[4834]: I0121 14:45:21.976802 4834 generic.go:334] "Generic (PLEG): container finished" podID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerID="40eec0d27708cc7751466849abe5e2631136bd2edc02150f29dcfb101a95a848" exitCode=0 Jan 21 14:45:21 crc kubenswrapper[4834]: I0121 14:45:21.976905 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerDied","Data":"40eec0d27708cc7751466849abe5e2631136bd2edc02150f29dcfb101a95a848"} Jan 21 14:45:22 crc kubenswrapper[4834]: I0121 14:45:22.987472 4834 generic.go:334] "Generic (PLEG): container finished" podID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerID="d8613630dadf96f42b677466e065d3a1b5ee00acbb4c38c9a5459fc54dc998b1" exitCode=0 Jan 21 14:45:22 crc kubenswrapper[4834]: I0121 14:45:22.987538 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerDied","Data":"d8613630dadf96f42b677466e065d3a1b5ee00acbb4c38c9a5459fc54dc998b1"} Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.278357 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.429811 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle\") pod \"b7d7a5da-2462-4992-9837-9cb1e54ea157\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.429913 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util\") pod \"b7d7a5da-2462-4992-9837-9cb1e54ea157\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.430009 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g28jz\" (UniqueName: \"kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz\") pod \"b7d7a5da-2462-4992-9837-9cb1e54ea157\" (UID: \"b7d7a5da-2462-4992-9837-9cb1e54ea157\") " Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.431728 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle" (OuterVolumeSpecName: "bundle") pod "b7d7a5da-2462-4992-9837-9cb1e54ea157" (UID: "b7d7a5da-2462-4992-9837-9cb1e54ea157"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.437552 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz" (OuterVolumeSpecName: "kube-api-access-g28jz") pod "b7d7a5da-2462-4992-9837-9cb1e54ea157" (UID: "b7d7a5da-2462-4992-9837-9cb1e54ea157"). InnerVolumeSpecName "kube-api-access-g28jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.443851 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util" (OuterVolumeSpecName: "util") pod "b7d7a5da-2462-4992-9837-9cb1e54ea157" (UID: "b7d7a5da-2462-4992-9837-9cb1e54ea157"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.531882 4834 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.531977 4834 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b7d7a5da-2462-4992-9837-9cb1e54ea157-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:24 crc kubenswrapper[4834]: I0121 14:45:24.531992 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g28jz\" (UniqueName: \"kubernetes.io/projected/b7d7a5da-2462-4992-9837-9cb1e54ea157-kube-api-access-g28jz\") on node \"crc\" DevicePath \"\"" Jan 21 14:45:25 crc kubenswrapper[4834]: I0121 14:45:25.004518 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" event={"ID":"b7d7a5da-2462-4992-9837-9cb1e54ea157","Type":"ContainerDied","Data":"23f4f7d0080562d729ab75b54b1de5a102c2d2cba607c40ad300e6f23c4657d5"} Jan 21 14:45:25 crc kubenswrapper[4834]: I0121 14:45:25.004590 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f4f7d0080562d729ab75b54b1de5a102c2d2cba607c40ad300e6f23c4657d5" Jan 21 14:45:25 crc kubenswrapper[4834]: I0121 14:45:25.004593 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb" Jan 21 14:45:29 crc kubenswrapper[4834]: I0121 14:45:29.820634 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8g2fp" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.430252 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nmfb6" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.713192 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f"] Jan 21 14:45:30 crc kubenswrapper[4834]: E0121 14:45:30.713454 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="util" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.713467 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="util" Jan 21 14:45:30 crc kubenswrapper[4834]: E0121 14:45:30.713491 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="pull" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.713499 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="pull" Jan 21 14:45:30 crc kubenswrapper[4834]: E0121 14:45:30.713510 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="extract" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.713516 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="extract" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.713616 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d7a5da-2462-4992-9837-9cb1e54ea157" containerName="extract" Jan 
21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.714129 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.716136 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.716312 4834 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-rr8sv" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.720505 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.728532 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl4zn\" (UniqueName: \"kubernetes.io/projected/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-kube-api-access-cl4zn\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.728571 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.745141 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f"] Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.830081 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl4zn\" (UniqueName: \"kubernetes.io/projected/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-kube-api-access-cl4zn\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.830135 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.831797 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:30 crc kubenswrapper[4834]: I0121 14:45:30.864510 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl4zn\" (UniqueName: \"kubernetes.io/projected/35dd921c-c86a-4cc4-9eb7-6789e8efc8b3-kube-api-access-cl4zn\") pod 
\"cert-manager-operator-controller-manager-64cf6dff88-mdp7f\" (UID: \"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:31 crc kubenswrapper[4834]: I0121 14:45:31.030680 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" Jan 21 14:45:31 crc kubenswrapper[4834]: I0121 14:45:31.347233 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f"] Jan 21 14:45:31 crc kubenswrapper[4834]: W0121 14:45:31.359859 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35dd921c_c86a_4cc4_9eb7_6789e8efc8b3.slice/crio-970d3a8548c2def3d6c4c79e3156b17191091eae6155c11ed13d65da8902b45a WatchSource:0}: Error finding container 970d3a8548c2def3d6c4c79e3156b17191091eae6155c11ed13d65da8902b45a: Status 404 returned error can't find the container with id 970d3a8548c2def3d6c4c79e3156b17191091eae6155c11ed13d65da8902b45a Jan 21 14:45:32 crc kubenswrapper[4834]: I0121 14:45:32.052063 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" event={"ID":"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3","Type":"ContainerStarted","Data":"970d3a8548c2def3d6c4c79e3156b17191091eae6155c11ed13d65da8902b45a"} Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.796072 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.798338 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.810158 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.900900 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfxl\" (UniqueName: \"kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.901052 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:40 crc kubenswrapper[4834]: I0121 14:45:40.901083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.002291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnfxl\" (UniqueName: \"kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.002391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.002416 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.002911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.003157 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.027628 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dnfxl\" (UniqueName: \"kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl\") pod \"community-operators-rc5rx\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:41 crc kubenswrapper[4834]: I0121 14:45:41.140971 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:44 crc kubenswrapper[4834]: I0121 14:45:44.949147 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:45:45 crc kubenswrapper[4834]: I0121 14:45:45.463255 4834 generic.go:334] "Generic (PLEG): container finished" podID="005feb97-8ed1-46d5-b96a-119991252193" containerID="17055f8ceb26deaa72dd567d572e5ff90fa3f81f92af77c6610b50c0da1f27e6" exitCode=0 Jan 21 14:45:45 crc kubenswrapper[4834]: I0121 14:45:45.463390 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerDied","Data":"17055f8ceb26deaa72dd567d572e5ff90fa3f81f92af77c6610b50c0da1f27e6"} Jan 21 14:45:45 crc kubenswrapper[4834]: I0121 14:45:45.464254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerStarted","Data":"209d3202a866ee5d1c46c1a8b50d09fa57a50ac5ac6745e0db65fcbeb0d2da86"} Jan 21 14:45:45 crc kubenswrapper[4834]: I0121 14:45:45.468069 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" event={"ID":"35dd921c-c86a-4cc4-9eb7-6789e8efc8b3","Type":"ContainerStarted","Data":"8089501c4e8feb1eabaecb415c6f0ad8433d51eab651d6d26ee252972e48c2a6"} Jan 21 14:45:45 crc kubenswrapper[4834]: I0121 14:45:45.510185 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-mdp7f" podStartSLOduration=2.060061957 podStartE2EDuration="15.510167123s" podCreationTimestamp="2026-01-21 14:45:30 +0000 UTC" firstStartedPulling="2026-01-21 14:45:31.364200773 +0000 UTC m=+877.338549818" lastFinishedPulling="2026-01-21 14:45:44.814305939 +0000 UTC m=+890.788654984" observedRunningTime="2026-01-21 14:45:45.509017787 +0000 UTC m=+891.483366842" watchObservedRunningTime="2026-01-21 14:45:45.510167123 +0000 UTC m=+891.484516178" Jan 21 14:45:46 crc kubenswrapper[4834]: I0121 14:45:46.478683 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerStarted","Data":"05c27384efa2a6a3554aae47699b7732826885ce07cfd7ca991cf850e89e02b9"} Jan 21 14:45:47 crc kubenswrapper[4834]: I0121 14:45:47.114107 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:45:47 crc kubenswrapper[4834]: I0121 14:45:47.114185 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:45:47 crc kubenswrapper[4834]: I0121 14:45:47.486157 4834 generic.go:334] "Generic (PLEG): container finished" podID="005feb97-8ed1-46d5-b96a-119991252193" containerID="05c27384efa2a6a3554aae47699b7732826885ce07cfd7ca991cf850e89e02b9" exitCode=0 Jan 21 14:45:47 crc kubenswrapper[4834]: I0121 14:45:47.486211 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerDied","Data":"05c27384efa2a6a3554aae47699b7732826885ce07cfd7ca991cf850e89e02b9"} Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.389747 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6fw8m"] Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.391414 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.394526 4834 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lvkcm" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.396000 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.401585 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.404375 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.406528 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.418721 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6fw8m"] Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.439497 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.529974 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwmb\" (UniqueName: \"kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.530071 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.530102 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.530122 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jl8\" (UniqueName: \"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-kube-api-access-v8jl8\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.530332 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.631848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.631911 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.631960 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jl8\" (UniqueName: 
\"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-kube-api-access-v8jl8\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.632022 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.632068 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwmb\" (UniqueName: \"kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.633277 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.633561 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.656866 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jl8\" (UniqueName: \"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-kube-api-access-v8jl8\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.657350 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d45a000-2985-4340-a80c-533709ceed95-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-6fw8m\" (UID: \"9d45a000-2985-4340-a80c-533709ceed95\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.667962 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwmb\" (UniqueName: \"kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb\") pod \"certified-operators-gl5z9\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.712129 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:45:48 crc kubenswrapper[4834]: I0121 14:45:48.738555 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:45:49 crc kubenswrapper[4834]: I0121 14:45:49.512502 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerStarted","Data":"3926dd3a3984aa356c23145d7ce6a9066ed3ceb69bf06044b7c206871d0ee581"} Jan 21 14:45:49 crc kubenswrapper[4834]: I0121 14:45:49.544511 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rc5rx" podStartSLOduration=6.615080049 podStartE2EDuration="9.544481521s" podCreationTimestamp="2026-01-21 14:45:40 +0000 UTC" firstStartedPulling="2026-01-21 14:45:45.465167425 +0000 UTC m=+891.439516470" lastFinishedPulling="2026-01-21 14:45:48.394568897 +0000 UTC m=+894.368917942" observedRunningTime="2026-01-21 14:45:49.540281692 +0000 UTC m=+895.514630737" watchObservedRunningTime="2026-01-21 14:45:49.544481521 +0000 UTC m=+895.518830576" Jan 21 14:45:49 crc kubenswrapper[4834]: I0121 14:45:49.576099 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:45:49 crc kubenswrapper[4834]: I0121 14:45:49.735566 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-6fw8m"] Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.536988 4834 generic.go:334] "Generic (PLEG): container finished" podID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerID="613c2b98b15111598013e46a08b154af3d87efba8e470c87ce314a5fbba090ae" exitCode=0 Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.538281 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerDied","Data":"613c2b98b15111598013e46a08b154af3d87efba8e470c87ce314a5fbba090ae"} Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.538330 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerStarted","Data":"e28f34abc01dacfe561be98383e79c3dffca3734537acf7f5d42c3a83e562f10"} Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.546238 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" event={"ID":"9d45a000-2985-4340-a80c-533709ceed95","Type":"ContainerStarted","Data":"74dd31a1c40eaf19425c43fc7973a37460aca3b8305cab161c3c16150548d864"} Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.591496 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-28mgw"] Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.592708 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.596221 4834 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wnwjw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.609338 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-28mgw"] Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.752459 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.752637 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx55m\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-kube-api-access-rx55m\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.853618 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx55m\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-kube-api-access-rx55m\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.853737 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.878900 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.897658 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx55m\" (UniqueName: \"kubernetes.io/projected/8c8bf664-44d3-4476-8b5c-1ed1f755fb6a-kube-api-access-rx55m\") pod \"cert-manager-cainjector-855d9ccff4-28mgw\" (UID: \"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:50 crc kubenswrapper[4834]: I0121 14:45:50.917607 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" Jan 21 14:45:51 crc kubenswrapper[4834]: I0121 14:45:51.141545 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:51 crc kubenswrapper[4834]: I0121 14:45:51.169751 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:45:51 crc kubenswrapper[4834]: I0121 14:45:51.618250 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-28mgw"] Jan 21 14:45:52 crc kubenswrapper[4834]: I0121 14:45:52.202526 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rc5rx" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="registry-server" probeResult="failure" output=< Jan 21 14:45:52 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 14:45:52 crc kubenswrapper[4834]: > Jan 21 14:45:52 crc kubenswrapper[4834]: I0121 14:45:52.637575 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" event={"ID":"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a","Type":"ContainerStarted","Data":"b9bae22c3ec843fc899ab77168d3540ac8d50514fdefa45697963f1d0ec1879e"} Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.219661 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.222450 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.242196 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.392155 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.392255 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfmg\" (UniqueName: \"kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.392471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.493985 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfmg\" (UniqueName: \"kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " 
pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.494105 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.494205 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.495378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.495521 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.519079 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfmg\" (UniqueName: \"kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg\") pod \"redhat-marketplace-82b4f\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:45:59 crc kubenswrapper[4834]: I0121 14:45:59.554008 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.195305 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.259180 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.315550 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-bq6sg"] Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.317030 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.319791 4834 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nkph5" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.330010 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-bq6sg"] Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.427606 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-bound-sa-token\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.432508 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9tr\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-kube-api-access-wc9tr\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.536213 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9tr\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-kube-api-access-wc9tr\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.536371 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-bound-sa-token\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.555333 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-bound-sa-token\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.555528 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9tr\" (UniqueName: \"kubernetes.io/projected/a6b51264-73aa-41b5-ac58-8adc362ca2c5-kube-api-access-wc9tr\") pod \"cert-manager-86cb77c54b-bq6sg\" (UID: \"a6b51264-73aa-41b5-ac58-8adc362ca2c5\") " pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.595389 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:46:01 crc kubenswrapper[4834]: I0121 14:46:01.640480 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-bq6sg" Jan 21 14:46:02 crc kubenswrapper[4834]: I0121 14:46:02.723545 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rc5rx" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="registry-server" containerID="cri-o://3926dd3a3984aa356c23145d7ce6a9066ed3ceb69bf06044b7c206871d0ee581" gracePeriod=2 Jan 21 14:46:05 crc kubenswrapper[4834]: I0121 14:46:05.905609 4834 generic.go:334] "Generic (PLEG): container finished" podID="005feb97-8ed1-46d5-b96a-119991252193" containerID="3926dd3a3984aa356c23145d7ce6a9066ed3ceb69bf06044b7c206871d0ee581" exitCode=0 Jan 21 14:46:05 crc kubenswrapper[4834]: I0121 14:46:05.906133 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerDied","Data":"3926dd3a3984aa356c23145d7ce6a9066ed3ceb69bf06044b7c206871d0ee581"} Jan 21 14:46:09 crc kubenswrapper[4834]: I0121 14:46:09.961293 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:46:09 crc kubenswrapper[4834]: I0121 14:46:09.962140 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rc5rx" event={"ID":"005feb97-8ed1-46d5-b96a-119991252193","Type":"ContainerDied","Data":"209d3202a866ee5d1c46c1a8b50d09fa57a50ac5ac6745e0db65fcbeb0d2da86"} Jan 21 14:46:09 crc kubenswrapper[4834]: I0121 14:46:09.962185 4834 scope.go:117] "RemoveContainer" containerID="3926dd3a3984aa356c23145d7ce6a9066ed3ceb69bf06044b7c206871d0ee581" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:09.993534 4834 scope.go:117] "RemoveContainer" containerID="05c27384efa2a6a3554aae47699b7732826885ce07cfd7ca991cf850e89e02b9" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.057436 4834 scope.go:117] "RemoveContainer" containerID="17055f8ceb26deaa72dd567d572e5ff90fa3f81f92af77c6610b50c0da1f27e6" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.100572 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnfxl\" (UniqueName: \"kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl\") pod \"005feb97-8ed1-46d5-b96a-119991252193\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.100687 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content\") pod \"005feb97-8ed1-46d5-b96a-119991252193\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.100751 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities\") pod \"005feb97-8ed1-46d5-b96a-119991252193\" (UID: \"005feb97-8ed1-46d5-b96a-119991252193\") " Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.101961 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities" (OuterVolumeSpecName: "utilities") pod "005feb97-8ed1-46d5-b96a-119991252193" (UID: "005feb97-8ed1-46d5-b96a-119991252193"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.122300 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl" (OuterVolumeSpecName: "kube-api-access-dnfxl") pod "005feb97-8ed1-46d5-b96a-119991252193" (UID: "005feb97-8ed1-46d5-b96a-119991252193"). InnerVolumeSpecName "kube-api-access-dnfxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.178354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "005feb97-8ed1-46d5-b96a-119991252193" (UID: "005feb97-8ed1-46d5-b96a-119991252193"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.203121 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.203156 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnfxl\" (UniqueName: \"kubernetes.io/projected/005feb97-8ed1-46d5-b96a-119991252193-kube-api-access-dnfxl\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.203167 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005feb97-8ed1-46d5-b96a-119991252193-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.497297 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-bq6sg"] Jan 21 14:46:10 crc kubenswrapper[4834]: W0121 14:46:10.501541 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b51264_73aa_41b5_ac58_8adc362ca2c5.slice/crio-839221ff7e3aec8db47e21d01937290b7ba4965b1fb5b56fc8c8fc2cf0659ac0 WatchSource:0}: Error finding container 839221ff7e3aec8db47e21d01937290b7ba4965b1fb5b56fc8c8fc2cf0659ac0: Status 404 returned error can't find the container with id 839221ff7e3aec8db47e21d01937290b7ba4965b1fb5b56fc8c8fc2cf0659ac0 Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.507183 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.971791 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" event={"ID":"8c8bf664-44d3-4476-8b5c-1ed1f755fb6a","Type":"ContainerStarted","Data":"62e8c0de8e2040c7759ccd9e15e922040fcba7fa863d2838961c30218c649622"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.974662 4834 generic.go:334] "Generic (PLEG): container finished" podID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerID="c46279e154cf8ae0fe6a426749b8f8519db17801a2fb61d4142dd1ecc4c36395" exitCode=0 Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.974779 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" 
event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerDied","Data":"c46279e154cf8ae0fe6a426749b8f8519db17801a2fb61d4142dd1ecc4c36395"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.974816 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerStarted","Data":"3640596d3d831e73f5c5106938952eff976f4fff86ce112abc1ce8ff3404548d"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.981583 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" event={"ID":"9d45a000-2985-4340-a80c-533709ceed95","Type":"ContainerStarted","Data":"8e52958f2f834bc1839bbd6c9cfe703693d8dd25694550af8d184b5d23df038b"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.981721 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.983394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-bq6sg" event={"ID":"a6b51264-73aa-41b5-ac58-8adc362ca2c5","Type":"ContainerStarted","Data":"63d67f5026315181bc3e92f73424f834fa5c383779f94ad9396c50be44ba44d0"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.983440 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-bq6sg" event={"ID":"a6b51264-73aa-41b5-ac58-8adc362ca2c5","Type":"ContainerStarted","Data":"839221ff7e3aec8db47e21d01937290b7ba4965b1fb5b56fc8c8fc2cf0659ac0"} Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.985625 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rc5rx" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.990165 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-28mgw" podStartSLOduration=2.60304177 podStartE2EDuration="20.990148454s" podCreationTimestamp="2026-01-21 14:45:50 +0000 UTC" firstStartedPulling="2026-01-21 14:45:51.636941111 +0000 UTC m=+897.611290156" lastFinishedPulling="2026-01-21 14:46:10.024047795 +0000 UTC m=+915.998396840" observedRunningTime="2026-01-21 14:46:10.987443931 +0000 UTC m=+916.961792986" watchObservedRunningTime="2026-01-21 14:46:10.990148454 +0000 UTC m=+916.964497499" Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.992701 4834 generic.go:334] "Generic (PLEG): container finished" podID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerID="9ca90767a6ddac5047b3e2384991608af081c5ec5b80eca5f8c51615db8ee779" exitCode=0 Jan 21 14:46:10 crc kubenswrapper[4834]: I0121 14:46:10.992751 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerDied","Data":"9ca90767a6ddac5047b3e2384991608af081c5ec5b80eca5f8c51615db8ee779"} Jan 21 14:46:11 crc kubenswrapper[4834]: I0121 14:46:11.008822 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-bq6sg" podStartSLOduration=10.008806342 podStartE2EDuration="10.008806342s" podCreationTimestamp="2026-01-21 14:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:46:11.004071225 +0000 UTC m=+916.978420270" 
watchObservedRunningTime="2026-01-21 14:46:11.008806342 +0000 UTC m=+916.983155387" Jan 21 14:46:11 crc kubenswrapper[4834]: I0121 14:46:11.042042 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" podStartSLOduration=2.767521595 podStartE2EDuration="23.040898813s" podCreationTimestamp="2026-01-21 14:45:48 +0000 UTC" firstStartedPulling="2026-01-21 14:45:49.733288571 +0000 UTC m=+895.707637616" lastFinishedPulling="2026-01-21 14:46:10.006665789 +0000 UTC m=+915.981014834" observedRunningTime="2026-01-21 14:46:11.035891739 +0000 UTC m=+917.010240784" watchObservedRunningTime="2026-01-21 14:46:11.040898813 +0000 UTC m=+917.015247858" Jan 21 14:46:11 crc kubenswrapper[4834]: I0121 14:46:11.103868 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:46:11 crc kubenswrapper[4834]: I0121 14:46:11.109845 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rc5rx"] Jan 21 14:46:12 crc kubenswrapper[4834]: I0121 14:46:12.007107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerStarted","Data":"5741177f49b3f5517c360d3d3240176951823a8d47b8542e46d75d65f0459f38"} Jan 21 14:46:12 crc kubenswrapper[4834]: I0121 14:46:12.074316 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gl5z9" podStartSLOduration=3.134467226 podStartE2EDuration="24.074285955s" podCreationTimestamp="2026-01-21 14:45:48 +0000 UTC" firstStartedPulling="2026-01-21 14:45:50.544609099 +0000 UTC m=+896.518958144" lastFinishedPulling="2026-01-21 14:46:11.484427828 +0000 UTC m=+917.458776873" observedRunningTime="2026-01-21 14:46:12.033558836 +0000 UTC m=+918.007907881" watchObservedRunningTime="2026-01-21 14:46:12.074285955 +0000 UTC m=+918.048635010" Jan 21 14:46:12 crc kubenswrapper[4834]: I0121 14:46:12.334613 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005feb97-8ed1-46d5-b96a-119991252193" path="/var/lib/kubelet/pods/005feb97-8ed1-46d5-b96a-119991252193/volumes" Jan 21 14:46:13 crc kubenswrapper[4834]: I0121 14:46:13.014611 4834 generic.go:334] "Generic (PLEG): container finished" podID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerID="bfc9fd60fb501202e99303103074f22212c515732eb07eb8fdd428b8ca542d1c" exitCode=0 Jan 21 14:46:13 crc kubenswrapper[4834]: I0121 14:46:13.014675 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerDied","Data":"bfc9fd60fb501202e99303103074f22212c515732eb07eb8fdd428b8ca542d1c"} Jan 21 14:46:15 crc kubenswrapper[4834]: I0121 14:46:15.031816 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerStarted","Data":"ee1bf7282c762c64dac67acad1c6141f32026dde1d5e9f8e0ae831f07254a573"} Jan 21 14:46:16 crc kubenswrapper[4834]: I0121 14:46:16.056709 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82b4f" podStartSLOduration=13.452094904 podStartE2EDuration="17.056689404s" podCreationTimestamp="2026-01-21 14:45:59 +0000 UTC" firstStartedPulling="2026-01-21 14:46:10.978920997 +0000 UTC 
m=+916.953270042" lastFinishedPulling="2026-01-21 14:46:14.583515507 +0000 UTC m=+920.557864542" observedRunningTime="2026-01-21 14:46:16.05428593 +0000 UTC m=+922.028634975" watchObservedRunningTime="2026-01-21 14:46:16.056689404 +0000 UTC m=+922.031038449" Jan 21 14:46:17 crc kubenswrapper[4834]: I0121 14:46:17.114448 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:46:17 crc kubenswrapper[4834]: I0121 14:46:17.114826 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:46:18 crc kubenswrapper[4834]: I0121 14:46:18.715492 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-6fw8m" Jan 21 14:46:18 crc kubenswrapper[4834]: I0121 14:46:18.739306 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:18 crc kubenswrapper[4834]: I0121 14:46:18.739404 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:18 crc kubenswrapper[4834]: I0121 14:46:18.853015 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:19 crc kubenswrapper[4834]: I0121 14:46:19.099763 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:19 crc kubenswrapper[4834]: I0121 14:46:19.554694 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:19 crc kubenswrapper[4834]: I0121 14:46:19.555068 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:19 crc kubenswrapper[4834]: I0121 14:46:19.593868 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:20 crc kubenswrapper[4834]: I0121 14:46:20.102533 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:20 crc kubenswrapper[4834]: I0121 14:46:20.846085 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:46:21 crc kubenswrapper[4834]: I0121 14:46:21.846036 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:46:22 crc kubenswrapper[4834]: I0121 14:46:22.073655 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gl5z9" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="registry-server" containerID="cri-o://5741177f49b3f5517c360d3d3240176951823a8d47b8542e46d75d65f0459f38" gracePeriod=2 Jan 21 14:46:22 crc kubenswrapper[4834]: I0121 14:46:22.073950 4834 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-marketplace-82b4f" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="registry-server" containerID="cri-o://ee1bf7282c762c64dac67acad1c6141f32026dde1d5e9f8e0ae831f07254a573" gracePeriod=2 Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.098359 4834 generic.go:334] "Generic (PLEG): container finished" podID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerID="5741177f49b3f5517c360d3d3240176951823a8d47b8542e46d75d65f0459f38" exitCode=0 Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.098718 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerDied","Data":"5741177f49b3f5517c360d3d3240176951823a8d47b8542e46d75d65f0459f38"} Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.102300 4834 generic.go:334] "Generic (PLEG): container finished" podID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerID="ee1bf7282c762c64dac67acad1c6141f32026dde1d5e9f8e0ae831f07254a573" exitCode=0 Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.102357 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerDied","Data":"ee1bf7282c762c64dac67acad1c6141f32026dde1d5e9f8e0ae831f07254a573"} Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.663686 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xvgpp"] Jan 21 14:46:25 crc kubenswrapper[4834]: E0121 14:46:25.664814 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="extract-content" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.664838 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="extract-content" Jan 21 14:46:25 crc kubenswrapper[4834]: E0121 14:46:25.664862 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="registry-server" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.664872 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="registry-server" Jan 21 14:46:25 crc kubenswrapper[4834]: E0121 14:46:25.664914 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="extract-utilities" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.664926 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="extract-utilities" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.665759 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="005feb97-8ed1-46d5-b96a-119991252193" containerName="registry-server" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.666820 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.672047 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.672608 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.673328 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-bw5cl" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.684592 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvgpp"] Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.849979 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jr4d\" (UniqueName: \"kubernetes.io/projected/365a1666-9c4a-4834-9ae1-e275ff9051b8-kube-api-access-4jr4d\") pod \"openstack-operator-index-xvgpp\" (UID: \"365a1666-9c4a-4834-9ae1-e275ff9051b8\") " pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.952035 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jr4d\" (UniqueName: \"kubernetes.io/projected/365a1666-9c4a-4834-9ae1-e275ff9051b8-kube-api-access-4jr4d\") pod \"openstack-operator-index-xvgpp\" (UID: \"365a1666-9c4a-4834-9ae1-e275ff9051b8\") " pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.975505 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jr4d\" (UniqueName: \"kubernetes.io/projected/365a1666-9c4a-4834-9ae1-e275ff9051b8-kube-api-access-4jr4d\") pod \"openstack-operator-index-xvgpp\" (UID: \"365a1666-9c4a-4834-9ae1-e275ff9051b8\") " pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:25 crc kubenswrapper[4834]: I0121 14:46:25.989310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:26 crc kubenswrapper[4834]: I0121 14:46:26.573729 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvgpp"] Jan 21 14:46:26 crc kubenswrapper[4834]: W0121 14:46:26.579738 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod365a1666_9c4a_4834_9ae1_e275ff9051b8.slice/crio-5015b82098ad0929eb466f6101310b9e79a0ece49ef5c60e1a88e22538a244da WatchSource:0}: Error finding container 5015b82098ad0929eb466f6101310b9e79a0ece49ef5c60e1a88e22538a244da: Status 404 returned error can't find the container with id 5015b82098ad0929eb466f6101310b9e79a0ece49ef5c60e1a88e22538a244da Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.119277 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvgpp" event={"ID":"365a1666-9c4a-4834-9ae1-e275ff9051b8","Type":"ContainerStarted","Data":"5015b82098ad0929eb466f6101310b9e79a0ece49ef5c60e1a88e22538a244da"} Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.735531 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.740895 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.783792 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnfmg\" (UniqueName: \"kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg\") pod \"ca471328-cfa8-4fdd-83c0-9bde23793baa\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.783930 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities\") pod \"ca471328-cfa8-4fdd-83c0-9bde23793baa\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.783985 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwmb\" (UniqueName: \"kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb\") pod \"6aec7cfe-f5b6-42be-a9d6-820508974509\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.784073 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content\") pod \"ca471328-cfa8-4fdd-83c0-9bde23793baa\" (UID: \"ca471328-cfa8-4fdd-83c0-9bde23793baa\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.784181 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities\") pod \"6aec7cfe-f5b6-42be-a9d6-820508974509\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.785123 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities" (OuterVolumeSpecName: "utilities") pod "6aec7cfe-f5b6-42be-a9d6-820508974509" (UID: "6aec7cfe-f5b6-42be-a9d6-820508974509"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.785220 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content\") pod \"6aec7cfe-f5b6-42be-a9d6-820508974509\" (UID: \"6aec7cfe-f5b6-42be-a9d6-820508974509\") " Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.785759 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.786525 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities" (OuterVolumeSpecName: "utilities") pod "ca471328-cfa8-4fdd-83c0-9bde23793baa" (UID: "ca471328-cfa8-4fdd-83c0-9bde23793baa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.791631 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb" (OuterVolumeSpecName: "kube-api-access-brwmb") pod "6aec7cfe-f5b6-42be-a9d6-820508974509" (UID: "6aec7cfe-f5b6-42be-a9d6-820508974509"). InnerVolumeSpecName "kube-api-access-brwmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.794611 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg" (OuterVolumeSpecName: "kube-api-access-wnfmg") pod "ca471328-cfa8-4fdd-83c0-9bde23793baa" (UID: "ca471328-cfa8-4fdd-83c0-9bde23793baa"). InnerVolumeSpecName "kube-api-access-wnfmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.818970 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca471328-cfa8-4fdd-83c0-9bde23793baa" (UID: "ca471328-cfa8-4fdd-83c0-9bde23793baa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.843758 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aec7cfe-f5b6-42be-a9d6-820508974509" (UID: "6aec7cfe-f5b6-42be-a9d6-820508974509"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.887328 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.887382 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwmb\" (UniqueName: \"kubernetes.io/projected/6aec7cfe-f5b6-42be-a9d6-820508974509-kube-api-access-brwmb\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.887398 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca471328-cfa8-4fdd-83c0-9bde23793baa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.887413 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aec7cfe-f5b6-42be-a9d6-820508974509-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:27 crc kubenswrapper[4834]: I0121 14:46:27.887425 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnfmg\" (UniqueName: \"kubernetes.io/projected/ca471328-cfa8-4fdd-83c0-9bde23793baa-kube-api-access-wnfmg\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.126234 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvgpp" event={"ID":"365a1666-9c4a-4834-9ae1-e275ff9051b8","Type":"ContainerStarted","Data":"2b4e1d6c1152a4e70649d8ecbcce39af2051b0aa801016bb04218ceeaebc4bc7"} Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.128318 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gl5z9" event={"ID":"6aec7cfe-f5b6-42be-a9d6-820508974509","Type":"ContainerDied","Data":"e28f34abc01dacfe561be98383e79c3dffca3734537acf7f5d42c3a83e562f10"} Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.128365 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gl5z9" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.128378 4834 scope.go:117] "RemoveContainer" containerID="5741177f49b3f5517c360d3d3240176951823a8d47b8542e46d75d65f0459f38" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.131143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82b4f" event={"ID":"ca471328-cfa8-4fdd-83c0-9bde23793baa","Type":"ContainerDied","Data":"3640596d3d831e73f5c5106938952eff976f4fff86ce112abc1ce8ff3404548d"} Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.131192 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82b4f" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.144875 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xvgpp" podStartSLOduration=1.986120144 podStartE2EDuration="3.144857019s" podCreationTimestamp="2026-01-21 14:46:25 +0000 UTC" firstStartedPulling="2026-01-21 14:46:26.581445761 +0000 UTC m=+932.555794806" lastFinishedPulling="2026-01-21 14:46:27.740182636 +0000 UTC m=+933.714531681" observedRunningTime="2026-01-21 14:46:28.140014758 +0000 UTC m=+934.114363803" watchObservedRunningTime="2026-01-21 14:46:28.144857019 +0000 UTC m=+934.119206064" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.145419 4834 scope.go:117] "RemoveContainer" containerID="9ca90767a6ddac5047b3e2384991608af081c5ec5b80eca5f8c51615db8ee779" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.170842 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.179623 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gl5z9"] Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.192597 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.193272 4834 scope.go:117] "RemoveContainer" containerID="613c2b98b15111598013e46a08b154af3d87efba8e470c87ce314a5fbba090ae" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.197419 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-82b4f"] Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.207341 4834 scope.go:117] "RemoveContainer" containerID="ee1bf7282c762c64dac67acad1c6141f32026dde1d5e9f8e0ae831f07254a573" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.223288 4834 scope.go:117] "RemoveContainer" containerID="bfc9fd60fb501202e99303103074f22212c515732eb07eb8fdd428b8ca542d1c" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.242022 4834 scope.go:117] "RemoveContainer" containerID="c46279e154cf8ae0fe6a426749b8f8519db17801a2fb61d4142dd1ecc4c36395" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.341803 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" path="/var/lib/kubelet/pods/6aec7cfe-f5b6-42be-a9d6-820508974509/volumes" Jan 21 14:46:28 crc kubenswrapper[4834]: I0121 14:46:28.342769 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" path="/var/lib/kubelet/pods/ca471328-cfa8-4fdd-83c0-9bde23793baa/volumes" Jan 21 14:46:35 crc kubenswrapper[4834]: I0121 14:46:35.990355 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:35 crc kubenswrapper[4834]: I0121 14:46:35.990987 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:36 crc kubenswrapper[4834]: I0121 14:46:36.023902 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:36 crc kubenswrapper[4834]: I0121 14:46:36.221263 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-xvgpp" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.700148 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv"] Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.700886 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.700910 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.700961 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="extract-utilities" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.700974 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="extract-utilities" Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.700987 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701000 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.701016 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="extract-utilities" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701027 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="extract-utilities" Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.701054 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="extract-content" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701067 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="extract-content" Jan 21 14:46:38 crc kubenswrapper[4834]: E0121 14:46:38.701089 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="extract-content" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701102 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="extract-content" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701295 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca471328-cfa8-4fdd-83c0-9bde23793baa" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.701319 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aec7cfe-f5b6-42be-a9d6-820508974509" containerName="registry-server" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.702520 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.704735 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-csz5h" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.716062 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv"] Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.850034 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.850144 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78c2b\" (UniqueName: \"kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.850238 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.952332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78c2b\" (UniqueName: \"kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.952910 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.953133 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.953832 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.953853 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:38 crc kubenswrapper[4834]: I0121 14:46:38.975070 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78c2b\" (UniqueName: \"kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b\") pod \"7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:39 crc kubenswrapper[4834]: I0121 14:46:39.063010 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:39 crc kubenswrapper[4834]: I0121 14:46:39.284648 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv"] Jan 21 14:46:40 crc kubenswrapper[4834]: I0121 14:46:40.223740 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerStarted","Data":"9f6c6b2280fdb2c39367fb58524a745b7d4c7deb9bfc495362f7bdf4d416e02f"} Jan 21 14:46:41 crc kubenswrapper[4834]: I0121 14:46:41.231112 4834 generic.go:334] "Generic (PLEG): container finished" podID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerID="00bbff3b35963a875f673cab64f1adc9cb1095045f9075556af7371ff6528fc3" exitCode=0 Jan 21 14:46:41 crc kubenswrapper[4834]: I0121 14:46:41.231166 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerDied","Data":"00bbff3b35963a875f673cab64f1adc9cb1095045f9075556af7371ff6528fc3"} Jan 21 14:46:45 crc kubenswrapper[4834]: I0121 14:46:45.270442 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerStarted","Data":"ee4342f60c08447090ded038f72bfbffe29ea674ac4cf0ca18775a12c6a63afb"} Jan 21 14:46:46 crc kubenswrapper[4834]: I0121 14:46:46.280604 4834 generic.go:334] "Generic (PLEG): container finished" podID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerID="ee4342f60c08447090ded038f72bfbffe29ea674ac4cf0ca18775a12c6a63afb" exitCode=0 Jan 21 14:46:46 crc kubenswrapper[4834]: I0121 14:46:46.280652 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" 
event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerDied","Data":"ee4342f60c08447090ded038f72bfbffe29ea674ac4cf0ca18775a12c6a63afb"} Jan 21 14:46:47 crc kubenswrapper[4834]: I0121 14:46:47.113563 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:46:47 crc kubenswrapper[4834]: I0121 14:46:47.113891 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:46:47 crc kubenswrapper[4834]: I0121 14:46:47.113963 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:46:47 crc kubenswrapper[4834]: I0121 14:46:47.114621 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:46:47 crc kubenswrapper[4834]: I0121 14:46:47.114687 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad" gracePeriod=600 Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.297327 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad" exitCode=0 Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.297364 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad"} Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.298107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8"} Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.298129 4834 scope.go:117] "RemoveContainer" containerID="ceefd8d5119722787eeefef2e7cfbf9c09721033eaf242427e38e33ba55b1fc5" Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.301370 4834 generic.go:334] "Generic (PLEG): container finished" podID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerID="1a5f0b0d326727cf22bfa5b69643ecf36c4f77018337c7d1f10af4feca625320" exitCode=0 Jan 21 14:46:48 crc kubenswrapper[4834]: I0121 14:46:48.301403 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" 
event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerDied","Data":"1a5f0b0d326727cf22bfa5b69643ecf36c4f77018337c7d1f10af4feca625320"} Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.555154 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.705034 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle\") pod \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.705149 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78c2b\" (UniqueName: \"kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b\") pod \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.705323 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util\") pod \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\" (UID: \"f23fb8fd-32c6-45d6-a0d8-97555eccacc6\") " Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.706386 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle" (OuterVolumeSpecName: "bundle") pod "f23fb8fd-32c6-45d6-a0d8-97555eccacc6" (UID: "f23fb8fd-32c6-45d6-a0d8-97555eccacc6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.711911 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b" (OuterVolumeSpecName: "kube-api-access-78c2b") pod "f23fb8fd-32c6-45d6-a0d8-97555eccacc6" (UID: "f23fb8fd-32c6-45d6-a0d8-97555eccacc6"). InnerVolumeSpecName "kube-api-access-78c2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.719829 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util" (OuterVolumeSpecName: "util") pod "f23fb8fd-32c6-45d6-a0d8-97555eccacc6" (UID: "f23fb8fd-32c6-45d6-a0d8-97555eccacc6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.806662 4834 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-util\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.806710 4834 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:49 crc kubenswrapper[4834]: I0121 14:46:49.806723 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78c2b\" (UniqueName: \"kubernetes.io/projected/f23fb8fd-32c6-45d6-a0d8-97555eccacc6-kube-api-access-78c2b\") on node \"crc\" DevicePath \"\"" Jan 21 14:46:50 crc kubenswrapper[4834]: I0121 14:46:50.319257 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" event={"ID":"f23fb8fd-32c6-45d6-a0d8-97555eccacc6","Type":"ContainerDied","Data":"9f6c6b2280fdb2c39367fb58524a745b7d4c7deb9bfc495362f7bdf4d416e02f"} Jan 21 14:46:50 crc kubenswrapper[4834]: I0121 14:46:50.319296 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6c6b2280fdb2c39367fb58524a745b7d4c7deb9bfc495362f7bdf4d416e02f" Jan 21 14:46:50 crc kubenswrapper[4834]: I0121 14:46:50.319359 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.273845 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt"] Jan 21 14:46:57 crc kubenswrapper[4834]: E0121 14:46:57.274678 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="extract" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.274691 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="extract" Jan 21 14:46:57 crc kubenswrapper[4834]: E0121 14:46:57.274706 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="pull" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.274712 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="pull" Jan 21 14:46:57 crc kubenswrapper[4834]: E0121 14:46:57.274733 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="util" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.274739 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="util" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.274848 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23fb8fd-32c6-45d6-a0d8-97555eccacc6" containerName="extract" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.275326 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.278070 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-r2qz7" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.318911 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt"] Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.407675 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbbg2\" (UniqueName: \"kubernetes.io/projected/c4e147a8-154c-4e8d-ad04-4a58a81b4942-kube-api-access-kbbg2\") pod \"openstack-operator-controller-init-6d4d7d8545-pr4wt\" (UID: \"c4e147a8-154c-4e8d-ad04-4a58a81b4942\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.508858 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbbg2\" (UniqueName: \"kubernetes.io/projected/c4e147a8-154c-4e8d-ad04-4a58a81b4942-kube-api-access-kbbg2\") pod \"openstack-operator-controller-init-6d4d7d8545-pr4wt\" (UID: \"c4e147a8-154c-4e8d-ad04-4a58a81b4942\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.531096 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbbg2\" (UniqueName: \"kubernetes.io/projected/c4e147a8-154c-4e8d-ad04-4a58a81b4942-kube-api-access-kbbg2\") pod \"openstack-operator-controller-init-6d4d7d8545-pr4wt\" (UID: \"c4e147a8-154c-4e8d-ad04-4a58a81b4942\") " pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.593117 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:46:57 crc kubenswrapper[4834]: I0121 14:46:57.855301 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt"] Jan 21 14:46:57 crc kubenswrapper[4834]: W0121 14:46:57.864945 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4e147a8_154c_4e8d_ad04_4a58a81b4942.slice/crio-cb2c66cefcc04cef75e405a517fbf16bfb0550fc3036a0405abafa19a62f4105 WatchSource:0}: Error finding container cb2c66cefcc04cef75e405a517fbf16bfb0550fc3036a0405abafa19a62f4105: Status 404 returned error can't find the container with id cb2c66cefcc04cef75e405a517fbf16bfb0550fc3036a0405abafa19a62f4105 Jan 21 14:46:58 crc kubenswrapper[4834]: I0121 14:46:58.378321 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" event={"ID":"c4e147a8-154c-4e8d-ad04-4a58a81b4942","Type":"ContainerStarted","Data":"cb2c66cefcc04cef75e405a517fbf16bfb0550fc3036a0405abafa19a62f4105"} Jan 21 14:47:08 crc kubenswrapper[4834]: I0121 14:47:08.537136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" event={"ID":"c4e147a8-154c-4e8d-ad04-4a58a81b4942","Type":"ContainerStarted","Data":"65c75c336216897d898e1843abda466d1a73c652bed44baf65851788ec0a8cdd"} Jan 21 14:47:08 crc kubenswrapper[4834]: I0121 14:47:08.538963 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:47:08 crc kubenswrapper[4834]: I0121 14:47:08.575327 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" podStartSLOduration=1.230785373 podStartE2EDuration="11.575305766s" podCreationTimestamp="2026-01-21 14:46:57 +0000 UTC" firstStartedPulling="2026-01-21 14:46:57.868277374 +0000 UTC m=+963.842626419" lastFinishedPulling="2026-01-21 14:47:08.212797757 +0000 UTC m=+974.187146812" observedRunningTime="2026-01-21 14:47:08.568611289 +0000 UTC m=+974.542960344" watchObservedRunningTime="2026-01-21 14:47:08.575305766 +0000 UTC m=+974.549654821" Jan 21 14:47:17 crc kubenswrapper[4834]: I0121 14:47:17.598203 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6d4d7d8545-pr4wt" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.822702 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.824358 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.826790 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-f4c9m" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.831191 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.831988 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.833808 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xdvrk" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.844005 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.895468 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnjm\" (UniqueName: \"kubernetes.io/projected/ac1f15b1-9c34-49e4-957a-74a950b6583f-kube-api-access-fpnjm\") pod \"barbican-operator-controller-manager-7ddb5c749-v7r86\" (UID: \"ac1f15b1-9c34-49e4-957a-74a950b6583f\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.923466 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.924434 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.929786 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-x56p7"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.930888 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.946953 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4nfsn" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.947750 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-9ffzk" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.951006 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-x56p7"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.971382 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.987844 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64"] Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.988958 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.996712 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sxhkn" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.998203 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmhb\" (UniqueName: \"kubernetes.io/projected/00cc8ba5-f3f5-42e9-a23a-6c3b1989763b-kube-api-access-rqmhb\") pod \"cinder-operator-controller-manager-9b68f5989-qdt7x\" (UID: \"00cc8ba5-f3f5-42e9-a23a-6c3b1989763b\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:47:43 crc kubenswrapper[4834]: I0121 14:47:43.998268 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpnjm\" (UniqueName: \"kubernetes.io/projected/ac1f15b1-9c34-49e4-957a-74a950b6583f-kube-api-access-fpnjm\") pod \"barbican-operator-controller-manager-7ddb5c749-v7r86\" (UID: \"ac1f15b1-9c34-49e4-957a-74a950b6583f\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.000205 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.017008 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.030791 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.031941 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.037232 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cmb9d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.051050 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.055856 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpnjm\" (UniqueName: \"kubernetes.io/projected/ac1f15b1-9c34-49e4-957a-74a950b6583f-kube-api-access-fpnjm\") pod \"barbican-operator-controller-manager-7ddb5c749-v7r86\" (UID: \"ac1f15b1-9c34-49e4-957a-74a950b6583f\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.061079 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.062174 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.064444 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.064806 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nfz46" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.087006 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.087922 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.096336 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.100805 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pj7w\" (UniqueName: \"kubernetes.io/projected/dded3929-2919-4903-8465-da99004a3cd6-kube-api-access-6pj7w\") pod \"glance-operator-controller-manager-c6994669c-x56p7\" (UID: \"dded3929-2919-4903-8465-da99004a3cd6\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.100869 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hswg4\" (UniqueName: \"kubernetes.io/projected/2bf8d583-f505-436a-a60f-ec418f4d5e94-kube-api-access-hswg4\") pod \"designate-operator-controller-manager-9f958b845-dfmb5\" (UID: \"2bf8d583-f505-436a-a60f-ec418f4d5e94\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.100893 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmhb\" (UniqueName: \"kubernetes.io/projected/00cc8ba5-f3f5-42e9-a23a-6c3b1989763b-kube-api-access-rqmhb\") pod \"cinder-operator-controller-manager-9b68f5989-qdt7x\" (UID: \"00cc8ba5-f3f5-42e9-a23a-6c3b1989763b\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.100947 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.100989 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5fz9\" (UniqueName: \"kubernetes.io/projected/ae8ff7f9-4d1b-4562-a307-f9ad95966c48-kube-api-access-x5fz9\") pod \"heat-operator-controller-manager-594c8c9d5d-2dj64\" (UID: \"ae8ff7f9-4d1b-4562-a307-f9ad95966c48\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.101007 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9x5\" (UniqueName: \"kubernetes.io/projected/c181377f-ebc2-4ebf-ba81-71d6609d37c4-kube-api-access-xp9x5\") pod \"horizon-operator-controller-manager-77d5c5b54f-jvfvr\" (UID: \"c181377f-ebc2-4ebf-ba81-71d6609d37c4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.101036 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jxj\" (UniqueName: \"kubernetes.io/projected/4d010ccc-9dc9-4d66-9544-354bc82380ca-kube-api-access-h2jxj\") pod \"ironic-operator-controller-manager-78757b4889-d5jml\" (UID: \"4d010ccc-9dc9-4d66-9544-354bc82380ca\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.101060 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m494\" (UniqueName: \"kubernetes.io/projected/8b4d1ded-fcb3-456e-9704-776079ec120f-kube-api-access-4m494\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.106990 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.112644 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-x2jlg" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.126730 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.127795 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.129162 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmhb\" (UniqueName: \"kubernetes.io/projected/00cc8ba5-f3f5-42e9-a23a-6c3b1989763b-kube-api-access-rqmhb\") pod \"cinder-operator-controller-manager-9b68f5989-qdt7x\" (UID: \"00cc8ba5-f3f5-42e9-a23a-6c3b1989763b\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.131232 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z2r4g" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.136675 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.137576 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.142625 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.143653 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dk76s" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.146302 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.152130 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.160597 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.163568 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.164752 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.174265 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-k8xlt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.178992 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.179740 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.181761 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ffq4q" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.199583 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.202786 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pj7w\" (UniqueName: \"kubernetes.io/projected/dded3929-2919-4903-8465-da99004a3cd6-kube-api-access-6pj7w\") pod \"glance-operator-controller-manager-c6994669c-x56p7\" (UID: \"dded3929-2919-4903-8465-da99004a3cd6\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.202864 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hswg4\" (UniqueName: \"kubernetes.io/projected/2bf8d583-f505-436a-a60f-ec418f4d5e94-kube-api-access-hswg4\") pod \"designate-operator-controller-manager-9f958b845-dfmb5\" (UID: \"2bf8d583-f505-436a-a60f-ec418f4d5e94\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.202904 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.202957 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5fz9\" (UniqueName: \"kubernetes.io/projected/ae8ff7f9-4d1b-4562-a307-f9ad95966c48-kube-api-access-x5fz9\") pod \"heat-operator-controller-manager-594c8c9d5d-2dj64\" (UID: \"ae8ff7f9-4d1b-4562-a307-f9ad95966c48\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.202983 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9x5\" (UniqueName: \"kubernetes.io/projected/c181377f-ebc2-4ebf-ba81-71d6609d37c4-kube-api-access-xp9x5\") pod \"horizon-operator-controller-manager-77d5c5b54f-jvfvr\" (UID: \"c181377f-ebc2-4ebf-ba81-71d6609d37c4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.203016 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2jxj\" (UniqueName: \"kubernetes.io/projected/4d010ccc-9dc9-4d66-9544-354bc82380ca-kube-api-access-h2jxj\") pod \"ironic-operator-controller-manager-78757b4889-d5jml\" (UID: \"4d010ccc-9dc9-4d66-9544-354bc82380ca\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.203038 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m494\" (UniqueName: \"kubernetes.io/projected/8b4d1ded-fcb3-456e-9704-776079ec120f-kube-api-access-4m494\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" 
(UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.204311 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.204373 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:47:44.704350746 +0000 UTC m=+1010.678699791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.208069 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.209143 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.216244 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bzr5n" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.238524 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.240693 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m494\" (UniqueName: \"kubernetes.io/projected/8b4d1ded-fcb3-456e-9704-776079ec120f-kube-api-access-4m494\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.241664 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2jxj\" (UniqueName: \"kubernetes.io/projected/4d010ccc-9dc9-4d66-9544-354bc82380ca-kube-api-access-h2jxj\") pod \"ironic-operator-controller-manager-78757b4889-d5jml\" (UID: \"4d010ccc-9dc9-4d66-9544-354bc82380ca\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.243318 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hswg4\" (UniqueName: \"kubernetes.io/projected/2bf8d583-f505-436a-a60f-ec418f4d5e94-kube-api-access-hswg4\") pod \"designate-operator-controller-manager-9f958b845-dfmb5\" (UID: \"2bf8d583-f505-436a-a60f-ec418f4d5e94\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.245076 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pj7w\" (UniqueName: \"kubernetes.io/projected/dded3929-2919-4903-8465-da99004a3cd6-kube-api-access-6pj7w\") pod \"glance-operator-controller-manager-c6994669c-x56p7\" (UID: \"dded3929-2919-4903-8465-da99004a3cd6\") " 
pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.245605 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5fz9\" (UniqueName: \"kubernetes.io/projected/ae8ff7f9-4d1b-4562-a307-f9ad95966c48-kube-api-access-x5fz9\") pod \"heat-operator-controller-manager-594c8c9d5d-2dj64\" (UID: \"ae8ff7f9-4d1b-4562-a307-f9ad95966c48\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.248900 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9x5\" (UniqueName: \"kubernetes.io/projected/c181377f-ebc2-4ebf-ba81-71d6609d37c4-kube-api-access-xp9x5\") pod \"horizon-operator-controller-manager-77d5c5b54f-jvfvr\" (UID: \"c181377f-ebc2-4ebf-ba81-71d6609d37c4\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.261346 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.278973 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.283710 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.293317 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.294350 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.309642 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4n5vw" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310670 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7tqd\" (UniqueName: \"kubernetes.io/projected/ff5898ae-a2fa-4ed4-9e56-74a5476ca185-kube-api-access-x7tqd\") pod \"octavia-operator-controller-manager-7fc9b76cf6-whz74\" (UID: \"ff5898ae-a2fa-4ed4-9e56-74a5476ca185\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310709 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxbn\" (UniqueName: \"kubernetes.io/projected/d2a8e7a2-c785-48e3-a143-30c89c49fe36-kube-api-access-kmxbn\") pod \"neutron-operator-controller-manager-cb4666565-nsvnj\" (UID: \"d2a8e7a2-c785-48e3-a143-30c89c49fe36\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310743 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt4q\" (UniqueName: \"kubernetes.io/projected/f3592517-3af0-41fd-bd16-da41fb583656-kube-api-access-5lt4q\") pod \"manila-operator-controller-manager-864f6b75bf-qmndz\" (UID: \"f3592517-3af0-41fd-bd16-da41fb583656\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hhn\" (UniqueName: \"kubernetes.io/projected/aa317ccb-4317-40f7-8661-20a62f36dd97-kube-api-access-75hhn\") pod \"keystone-operator-controller-manager-767fdc4f47-lczqr\" (UID: \"aa317ccb-4317-40f7-8661-20a62f36dd97\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310845 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqhn\" (UniqueName: \"kubernetes.io/projected/98485c62-7908-4ca9-8436-71dd52f371df-kube-api-access-gpqhn\") pod \"mariadb-operator-controller-manager-c87fff755-bdrss\" (UID: \"98485c62-7908-4ca9-8436-71dd52f371df\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.310876 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb644\" (UniqueName: \"kubernetes.io/projected/8728d4cf-ef2f-4e71-875f-6227ac7117db-kube-api-access-bb644\") pod \"nova-operator-controller-manager-65849867d6-7ww2d\" (UID: \"8728d4cf-ef2f-4e71-875f-6227ac7117db\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.311136 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.426245 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.427995 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.428023 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.438325 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.446332 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.452687 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lt4q\" (UniqueName: \"kubernetes.io/projected/f3592517-3af0-41fd-bd16-da41fb583656-kube-api-access-5lt4q\") pod \"manila-operator-controller-manager-864f6b75bf-qmndz\" (UID: \"f3592517-3af0-41fd-bd16-da41fb583656\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.453015 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hhn\" (UniqueName: \"kubernetes.io/projected/aa317ccb-4317-40f7-8661-20a62f36dd97-kube-api-access-75hhn\") pod \"keystone-operator-controller-manager-767fdc4f47-lczqr\" (UID: \"aa317ccb-4317-40f7-8661-20a62f36dd97\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.453386 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqhn\" (UniqueName: \"kubernetes.io/projected/98485c62-7908-4ca9-8436-71dd52f371df-kube-api-access-gpqhn\") pod \"mariadb-operator-controller-manager-c87fff755-bdrss\" (UID: \"98485c62-7908-4ca9-8436-71dd52f371df\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.453427 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb644\" (UniqueName: \"kubernetes.io/projected/8728d4cf-ef2f-4e71-875f-6227ac7117db-kube-api-access-bb644\") pod \"nova-operator-controller-manager-65849867d6-7ww2d\" (UID: \"8728d4cf-ef2f-4e71-875f-6227ac7117db\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.453782 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7tqd\" (UniqueName: \"kubernetes.io/projected/ff5898ae-a2fa-4ed4-9e56-74a5476ca185-kube-api-access-x7tqd\") pod \"octavia-operator-controller-manager-7fc9b76cf6-whz74\" (UID: \"ff5898ae-a2fa-4ed4-9e56-74a5476ca185\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.454033 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kmxbn\" (UniqueName: \"kubernetes.io/projected/d2a8e7a2-c785-48e3-a143-30c89c49fe36-kube-api-access-kmxbn\") pod \"neutron-operator-controller-manager-cb4666565-nsvnj\" (UID: \"d2a8e7a2-c785-48e3-a143-30c89c49fe36\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.457504 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.459516 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zzljs" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.460204 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.468767 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pr2ml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.490264 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.616576 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.620025 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.629466 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqhn\" (UniqueName: \"kubernetes.io/projected/98485c62-7908-4ca9-8436-71dd52f371df-kube-api-access-gpqhn\") pod \"mariadb-operator-controller-manager-c87fff755-bdrss\" (UID: \"98485c62-7908-4ca9-8436-71dd52f371df\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.640306 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/2097f21e-3f3a-435e-9003-15846c98efbd-kube-api-access-j9m5v\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.640407 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.656106 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb644\" (UniqueName: \"kubernetes.io/projected/8728d4cf-ef2f-4e71-875f-6227ac7117db-kube-api-access-bb644\") pod 
\"nova-operator-controller-manager-65849867d6-7ww2d\" (UID: \"8728d4cf-ef2f-4e71-875f-6227ac7117db\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.677729 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.744098 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/2097f21e-3f3a-435e-9003-15846c98efbd-kube-api-access-j9m5v\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.744166 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwc4\" (UniqueName: \"kubernetes.io/projected/2e0a31fa-5416-4a94-a915-69f0561794bc-kube-api-access-lqwc4\") pod \"ovn-operator-controller-manager-55db956ddc-cv9xt\" (UID: \"2e0a31fa-5416-4a94-a915-69f0561794bc\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.744226 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.744260 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.745264 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.745318 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:47:45.745301377 +0000 UTC m=+1011.719650422 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.745603 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: E0121 14:47:44.745633 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:47:45.245623627 +0000 UTC m=+1011.219972672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.747351 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.748339 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.754022 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.755155 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.761708 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8zsr6" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.762018 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h6pnw" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.779253 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.786713 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxbn\" (UniqueName: \"kubernetes.io/projected/d2a8e7a2-c785-48e3-a143-30c89c49fe36-kube-api-access-kmxbn\") pod \"neutron-operator-controller-manager-cb4666565-nsvnj\" (UID: \"d2a8e7a2-c785-48e3-a143-30c89c49fe36\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.792531 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7tqd\" (UniqueName: \"kubernetes.io/projected/ff5898ae-a2fa-4ed4-9e56-74a5476ca185-kube-api-access-x7tqd\") pod \"octavia-operator-controller-manager-7fc9b76cf6-whz74\" (UID: \"ff5898ae-a2fa-4ed4-9e56-74a5476ca185\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.794450 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lt4q\" (UniqueName: \"kubernetes.io/projected/f3592517-3af0-41fd-bd16-da41fb583656-kube-api-access-5lt4q\") pod \"manila-operator-controller-manager-864f6b75bf-qmndz\" (UID: \"f3592517-3af0-41fd-bd16-da41fb583656\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.794606 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.794762 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/2097f21e-3f3a-435e-9003-15846c98efbd-kube-api-access-j9m5v\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.795912 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hhn\" (UniqueName: \"kubernetes.io/projected/aa317ccb-4317-40f7-8661-20a62f36dd97-kube-api-access-75hhn\") pod \"keystone-operator-controller-manager-767fdc4f47-lczqr\" (UID: \"aa317ccb-4317-40f7-8661-20a62f36dd97\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.802872 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.803921 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.809870 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gjmlg" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.841867 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.845874 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclw5\" (UniqueName: \"kubernetes.io/projected/458dc6de-2219-419b-ab95-76a72a77a097-kube-api-access-lclw5\") pod \"telemetry-operator-controller-manager-5f8f495fcf-hlcxr\" (UID: \"458dc6de-2219-419b-ab95-76a72a77a097\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.846006 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwc4\" (UniqueName: \"kubernetes.io/projected/2e0a31fa-5416-4a94-a915-69f0561794bc-kube-api-access-lqwc4\") pod \"ovn-operator-controller-manager-55db956ddc-cv9xt\" (UID: \"2e0a31fa-5416-4a94-a915-69f0561794bc\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.846070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9tz\" (UniqueName: \"kubernetes.io/projected/69493e6c-ad47-4a9e-aad4-b93cee5d4ac7-kube-api-access-md9tz\") pod \"swift-operator-controller-manager-85dd56d4cc-lt695\" (UID: \"69493e6c-ad47-4a9e-aad4-b93cee5d4ac7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.846170 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8hk\" (UniqueName: \"kubernetes.io/projected/e8086139-0fe1-4859-8e4c-94eea0dd6a18-kube-api-access-wf8hk\") pod \"placement-operator-controller-manager-686df47fcb-28wfl\" (UID: \"e8086139-0fe1-4859-8e4c-94eea0dd6a18\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.859108 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.864888 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.868453 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5n84d" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.871914 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.876760 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwc4\" (UniqueName: \"kubernetes.io/projected/2e0a31fa-5416-4a94-a915-69f0561794bc-kube-api-access-lqwc4\") pod \"ovn-operator-controller-manager-55db956ddc-cv9xt\" (UID: \"2e0a31fa-5416-4a94-a915-69f0561794bc\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.878408 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.889810 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.891474 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.893499 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-c8f8g" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.905584 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.909981 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.912344 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.923530 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.924587 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.928108 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.928228 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-86rdz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.928122 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.945340 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949349 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949390 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949427 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkbx\" (UniqueName: \"kubernetes.io/projected/941ce977-5cf7-49e2-b96d-8446fefa95cd-kube-api-access-zvkbx\") pod \"watcher-operator-controller-manager-64cd966744-44bkf\" (UID: \"941ce977-5cf7-49e2-b96d-8446fefa95cd\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949463 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8hk\" (UniqueName: \"kubernetes.io/projected/e8086139-0fe1-4859-8e4c-94eea0dd6a18-kube-api-access-wf8hk\") pod \"placement-operator-controller-manager-686df47fcb-28wfl\" (UID: \"e8086139-0fe1-4859-8e4c-94eea0dd6a18\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949489 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52lq\" (UniqueName: \"kubernetes.io/projected/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-kube-api-access-x52lq\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949528 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfpz\" (UniqueName: \"kubernetes.io/projected/5c2b9737-fefd-4db7-a60d-5343a8b44554-kube-api-access-pwfpz\") pod \"test-operator-controller-manager-7cd8bc9dbb-92r4m\" (UID: \"5c2b9737-fefd-4db7-a60d-5343a8b44554\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949565 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lclw5\" (UniqueName: \"kubernetes.io/projected/458dc6de-2219-419b-ab95-76a72a77a097-kube-api-access-lclw5\") pod \"telemetry-operator-controller-manager-5f8f495fcf-hlcxr\" (UID: \"458dc6de-2219-419b-ab95-76a72a77a097\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.949637 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-md9tz\" (UniqueName: \"kubernetes.io/projected/69493e6c-ad47-4a9e-aad4-b93cee5d4ac7-kube-api-access-md9tz\") pod \"swift-operator-controller-manager-85dd56d4cc-lt695\" (UID: \"69493e6c-ad47-4a9e-aad4-b93cee5d4ac7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.960244 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.970832 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p"] Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.975454 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md9tz\" (UniqueName: \"kubernetes.io/projected/69493e6c-ad47-4a9e-aad4-b93cee5d4ac7-kube-api-access-md9tz\") pod \"swift-operator-controller-manager-85dd56d4cc-lt695\" (UID: \"69493e6c-ad47-4a9e-aad4-b93cee5d4ac7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.976811 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8hk\" (UniqueName: \"kubernetes.io/projected/e8086139-0fe1-4859-8e4c-94eea0dd6a18-kube-api-access-wf8hk\") pod \"placement-operator-controller-manager-686df47fcb-28wfl\" (UID: \"e8086139-0fe1-4859-8e4c-94eea0dd6a18\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:47:44 crc kubenswrapper[4834]: I0121 14:47:44.978087 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lclw5\" (UniqueName: \"kubernetes.io/projected/458dc6de-2219-419b-ab95-76a72a77a097-kube-api-access-lclw5\") pod \"telemetry-operator-controller-manager-5f8f495fcf-hlcxr\" (UID: \"458dc6de-2219-419b-ab95-76a72a77a097\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.006298 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.035306 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.046752 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.051212 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.051258 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.051302 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkbx\" (UniqueName: \"kubernetes.io/projected/941ce977-5cf7-49e2-b96d-8446fefa95cd-kube-api-access-zvkbx\") pod \"watcher-operator-controller-manager-64cd966744-44bkf\" (UID: \"941ce977-5cf7-49e2-b96d-8446fefa95cd\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.051346 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x52lq\" (UniqueName: \"kubernetes.io/projected/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-kube-api-access-x52lq\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.051380 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfpz\" (UniqueName: \"kubernetes.io/projected/5c2b9737-fefd-4db7-a60d-5343a8b44554-kube-api-access-pwfpz\") pod \"test-operator-controller-manager-7cd8bc9dbb-92r4m\" (UID: \"5c2b9737-fefd-4db7-a60d-5343a8b44554\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.051463 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.051535 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:45.551516538 +0000 UTC m=+1011.525865583 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.051699 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.051779 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:45.551757135 +0000 UTC m=+1011.526106180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.083382 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkbx\" (UniqueName: \"kubernetes.io/projected/941ce977-5cf7-49e2-b96d-8446fefa95cd-kube-api-access-zvkbx\") pod \"watcher-operator-controller-manager-64cd966744-44bkf\" (UID: \"941ce977-5cf7-49e2-b96d-8446fefa95cd\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.085482 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfpz\" (UniqueName: \"kubernetes.io/projected/5c2b9737-fefd-4db7-a60d-5343a8b44554-kube-api-access-pwfpz\") pod \"test-operator-controller-manager-7cd8bc9dbb-92r4m\" (UID: \"5c2b9737-fefd-4db7-a60d-5343a8b44554\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.090582 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s"] Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.091605 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.094277 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tqg5b" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.105723 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s"] Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.150775 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.155420 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpjm\" (UniqueName: \"kubernetes.io/projected/bca7db69-2696-4e85-96dc-7c9140549f9a-kube-api-access-vvpjm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wnz6s\" (UID: \"bca7db69-2696-4e85-96dc-7c9140549f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.186142 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.216083 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x52lq\" (UniqueName: \"kubernetes.io/projected/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-kube-api-access-x52lq\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.267208 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.275651 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.275726 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpjm\" (UniqueName: \"kubernetes.io/projected/bca7db69-2696-4e85-96dc-7c9140549f9a-kube-api-access-vvpjm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wnz6s\" (UID: \"bca7db69-2696-4e85-96dc-7c9140549f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.275845 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.275958 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:47:46.275918866 +0000 UTC m=+1012.250267911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.525135 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpjm\" (UniqueName: \"kubernetes.io/projected/bca7db69-2696-4e85-96dc-7c9140549f9a-kube-api-access-vvpjm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wnz6s\" (UID: \"bca7db69-2696-4e85-96dc-7c9140549f9a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.557527 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.557610 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.558788 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.558909 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:46.558877181 +0000 UTC m=+1012.533226226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.564844 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.564973 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:46.564920809 +0000 UTC m=+1012.539269854 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.590585 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.627133 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x"] Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.685985 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.813408 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.814052 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: E0121 14:47:45.814154 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:47:47.814123701 +0000 UTC m=+1013.788472746 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.927687 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" event={"ID":"00cc8ba5-f3f5-42e9-a23a-6c3b1989763b","Type":"ContainerStarted","Data":"7026b93ec031ce426ed644764d50c5ee8bc95c44854e1517c65223e3d84e4319"} Jan 21 14:47:45 crc kubenswrapper[4834]: I0121 14:47:45.966634 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64"] Jan 21 14:47:46 crc kubenswrapper[4834]: I0121 14:47:46.461395 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.461588 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.461635 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:47:48.461620365 +0000 UTC m=+1014.435969410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: I0121 14:47:46.567326 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:46 crc kubenswrapper[4834]: I0121 14:47:46.567409 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.567528 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.567622 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. 
No retries permitted until 2026-01-21 14:47:48.567598 +0000 UTC m=+1014.541947075 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.568565 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: E0121 14:47:46.569022 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:48.568996694 +0000 UTC m=+1014.543345799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:47:46 crc kubenswrapper[4834]: I0121 14:47:46.956590 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" event={"ID":"ae8ff7f9-4d1b-4562-a307-f9ad95966c48","Type":"ContainerStarted","Data":"f01fd69b6a718a62a0daae8aeda93a3e3f776137f4d1fa10767ae819a09180a3"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.203366 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.228476 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc181377f_ebc2_4ebf_ba81_71d6609d37c4.slice/crio-278db58a5fa03a729e3e84f60ec0b57e3a363aacf2605cfbd0784cc43f56b008 WatchSource:0}: Error finding container 278db58a5fa03a729e3e84f60ec0b57e3a363aacf2605cfbd0784cc43f56b008: Status 404 returned error can't find the container with id 278db58a5fa03a729e3e84f60ec0b57e3a363aacf2605cfbd0784cc43f56b008 Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.242946 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac1f15b1_9c34_49e4_957a_74a950b6583f.slice/crio-47f4b76b019dba0ddaefc5623b046037270f3a96c4814de6f3c2376f0ac299ff WatchSource:0}: Error finding container 47f4b76b019dba0ddaefc5623b046037270f3a96c4814de6f3c2376f0ac299ff: Status 404 returned error can't find the container with id 47f4b76b019dba0ddaefc5623b046037270f3a96c4814de6f3c2376f0ac299ff Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.249105 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.254522 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.259452 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-x56p7"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.263997 4834 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8728d4cf_ef2f_4e71_875f_6227ac7117db.slice/crio-3f1eb2db604679c016f8adfd60713e8ce0a9e8e1c961f929665a4762f4471ab4 WatchSource:0}: Error finding container 3f1eb2db604679c016f8adfd60713e8ce0a9e8e1c961f929665a4762f4471ab4: Status 404 returned error can't find the container with id 3f1eb2db604679c016f8adfd60713e8ce0a9e8e1c961f929665a4762f4471ab4 Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.264145 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.419373 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.426817 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.432177 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98485c62_7908_4ca9_8436_71dd52f371df.slice/crio-07670e0bf7ca8e9af4868a8985e21e11d70b3e0c67aa098174f86a26089aed57 WatchSource:0}: Error finding container 07670e0bf7ca8e9af4868a8985e21e11d70b3e0c67aa098174f86a26089aed57: Status 404 returned error can't find the container with id 07670e0bf7ca8e9af4868a8985e21e11d70b3e0c67aa098174f86a26089aed57 Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.434505 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.438193 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2a8e7a2_c785_48e3_a143_30c89c49fe36.slice/crio-f834c718108d413a2661f33ae0c758a91e8143485bacfb625a823cf9d2436ffc WatchSource:0}: Error finding container f834c718108d413a2661f33ae0c758a91e8143485bacfb625a823cf9d2436ffc: Status 404 returned error can't find the container with id f834c718108d413a2661f33ae0c758a91e8143485bacfb625a823cf9d2436ffc Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.441910 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.448407 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e0a31fa_5416_4a94_a915_69f0561794bc.slice/crio-fa63d12a163832470fa9b0161947c456b7cefd791cc81ffc6fe0d134fd200698 WatchSource:0}: Error finding container fa63d12a163832470fa9b0161947c456b7cefd791cc81ffc6fe0d134fd200698: Status 404 returned error can't find the container with id fa63d12a163832470fa9b0161947c456b7cefd791cc81ffc6fe0d134fd200698 Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.449484 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.453697 4834 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d010ccc_9dc9_4d66_9544_354bc82380ca.slice/crio-ac5b4202ea666d5e4d1387dde678e97cc2d208dba02a8d1fc3f150325f92fc2b WatchSource:0}: Error finding container ac5b4202ea666d5e4d1387dde678e97cc2d208dba02a8d1fc3f150325f92fc2b: Status 404 returned error can't find the container with id ac5b4202ea666d5e4d1387dde678e97cc2d208dba02a8d1fc3f150325f92fc2b Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.462335 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.595625 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.610001 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.614098 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbca7db69_2696_4e85_96dc_7c9140549f9a.slice/crio-7cfe0526875536014d36f8fb86d9498bb0c204e476cff1e2f6e5d50015b4b34c WatchSource:0}: Error finding container 7cfe0526875536014d36f8fb86d9498bb0c204e476cff1e2f6e5d50015b4b34c: Status 404 returned error can't find the container with id 7cfe0526875536014d36f8fb86d9498bb0c204e476cff1e2f6e5d50015b4b34c Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.616463 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m"] Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.618095 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wnz6s_openstack-operators(bca7db69-2696-4e85-96dc-7c9140549f9a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.618362 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3592517_3af0_41fd_bd16_da41fb583656.slice/crio-dd7ad58e601ee49a13bdd5f0d7810752cf22a1c075203363140929e75a8fc604 WatchSource:0}: Error finding container dd7ad58e601ee49a13bdd5f0d7810752cf22a1c075203363140929e75a8fc604: Status 404 returned error can't find the container with id dd7ad58e601ee49a13bdd5f0d7810752cf22a1c075203363140929e75a8fc604 Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.619796 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podUID="bca7db69-2696-4e85-96dc-7c9140549f9a" Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.625304 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c2b9737_fefd_4db7_a60d_5343a8b44554.slice/crio-1eecd387635359474b7e87b7e0561306b8a461a109f2e01d936095148fdbcb13 WatchSource:0}: Error finding container 1eecd387635359474b7e87b7e0561306b8a461a109f2e01d936095148fdbcb13: Status 404 returned error can't find the container with id 1eecd387635359474b7e87b7e0561306b8a461a109f2e01d936095148fdbcb13 Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.626003 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lt4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-qmndz_openstack-operators(f3592517-3af0-41fd-bd16-da41fb583656): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.627820 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podUID="f3592517-3af0-41fd-bd16-da41fb583656" Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.630163 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf"] Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.636663 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr"] Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.640495 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa317ccb_4317_40f7_8661_20a62f36dd97.slice/crio-911e3e3e7699f32a7aa37a790331100507b11c6f8b1c28170edde130f20fca01 WatchSource:0}: Error finding container 911e3e3e7699f32a7aa37a790331100507b11c6f8b1c28170edde130f20fca01: Status 404 returned error can't find the container with id 911e3e3e7699f32a7aa37a790331100507b11c6f8b1c28170edde130f20fca01 Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.642982 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695"] Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.644343 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75hhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-lczqr_openstack-operators(aa317ccb-4317-40f7-8661-20a62f36dd97): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: W0121 14:47:47.644725 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458dc6de_2219_419b_ab95_76a72a77a097.slice/crio-5397ee262db527192496a7043e742198a87103b26200a0ab76541c9447f30f3b WatchSource:0}: Error finding container 5397ee262db527192496a7043e742198a87103b26200a0ab76541c9447f30f3b: Status 404 returned error can't find the container with id 5397ee262db527192496a7043e742198a87103b26200a0ab76541c9447f30f3b Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.645788 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podUID="aa317ccb-4317-40f7-8661-20a62f36dd97" Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.649716 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr"] Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.650848 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lclw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-hlcxr_openstack-operators(458dc6de-2219-419b-ab95-76a72a77a097): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.651466 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvkbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-44bkf_openstack-operators(941ce977-5cf7-49e2-b96d-8446fefa95cd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.652173 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podUID="458dc6de-2219-419b-ab95-76a72a77a097" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.652888 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podUID="941ce977-5cf7-49e2-b96d-8446fefa95cd" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.675788 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-md9tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-lt695_openstack-operators(69493e6c-ad47-4a9e-aad4-b93cee5d4ac7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.679039 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" podUID="69493e6c-ad47-4a9e-aad4-b93cee5d4ac7" Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.843302 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.843547 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.843674 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:47:51.843642526 +0000 UTC m=+1017.817991571 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.971136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" event={"ID":"2bf8d583-f505-436a-a60f-ec418f4d5e94","Type":"ContainerStarted","Data":"e86b98941e5870c3fe2d00de79eebe25065875261dfb0e648f7df5f7cc25b8bd"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.973394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" event={"ID":"aa317ccb-4317-40f7-8661-20a62f36dd97","Type":"ContainerStarted","Data":"911e3e3e7699f32a7aa37a790331100507b11c6f8b1c28170edde130f20fca01"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.974711 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" event={"ID":"8728d4cf-ef2f-4e71-875f-6227ac7117db","Type":"ContainerStarted","Data":"3f1eb2db604679c016f8adfd60713e8ce0a9e8e1c961f929665a4762f4471ab4"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.975594 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" event={"ID":"941ce977-5cf7-49e2-b96d-8446fefa95cd","Type":"ContainerStarted","Data":"3f3e4a08285e5b77edc2fa5ba7e54bd94499b61517f77b2e6f19c9876ca2c4f0"} Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.976114 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podUID="aa317ccb-4317-40f7-8661-20a62f36dd97" Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.976508 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podUID="941ce977-5cf7-49e2-b96d-8446fefa95cd" Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.977679 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" event={"ID":"98485c62-7908-4ca9-8436-71dd52f371df","Type":"ContainerStarted","Data":"07670e0bf7ca8e9af4868a8985e21e11d70b3e0c67aa098174f86a26089aed57"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.986317 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" event={"ID":"ff5898ae-a2fa-4ed4-9e56-74a5476ca185","Type":"ContainerStarted","Data":"718decc69e16be2a66849bd1500c6e311469061fb57ac27c537d8b2c07d11df8"} Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.989919 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" 
event={"ID":"bca7db69-2696-4e85-96dc-7c9140549f9a","Type":"ContainerStarted","Data":"7cfe0526875536014d36f8fb86d9498bb0c204e476cff1e2f6e5d50015b4b34c"} Jan 21 14:47:47 crc kubenswrapper[4834]: E0121 14:47:47.991377 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podUID="bca7db69-2696-4e85-96dc-7c9140549f9a" Jan 21 14:47:47 crc kubenswrapper[4834]: I0121 14:47:47.996564 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" event={"ID":"f3592517-3af0-41fd-bd16-da41fb583656","Type":"ContainerStarted","Data":"dd7ad58e601ee49a13bdd5f0d7810752cf22a1c075203363140929e75a8fc604"} Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:47.999177 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podUID="f3592517-3af0-41fd-bd16-da41fb583656" Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.000383 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" event={"ID":"5c2b9737-fefd-4db7-a60d-5343a8b44554","Type":"ContainerStarted","Data":"1eecd387635359474b7e87b7e0561306b8a461a109f2e01d936095148fdbcb13"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.003441 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" event={"ID":"c181377f-ebc2-4ebf-ba81-71d6609d37c4","Type":"ContainerStarted","Data":"278db58a5fa03a729e3e84f60ec0b57e3a363aacf2605cfbd0784cc43f56b008"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.004706 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" event={"ID":"d2a8e7a2-c785-48e3-a143-30c89c49fe36","Type":"ContainerStarted","Data":"f834c718108d413a2661f33ae0c758a91e8143485bacfb625a823cf9d2436ffc"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.006393 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" event={"ID":"69493e6c-ad47-4a9e-aad4-b93cee5d4ac7","Type":"ContainerStarted","Data":"a843386d2a29e9b9301a11b509a52e300dc64b1cdce0d1e863e491bd2071b263"} Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.009804 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" podUID="69493e6c-ad47-4a9e-aad4-b93cee5d4ac7" Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.020821 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" 
event={"ID":"2e0a31fa-5416-4a94-a915-69f0561794bc","Type":"ContainerStarted","Data":"fa63d12a163832470fa9b0161947c456b7cefd791cc81ffc6fe0d134fd200698"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.049420 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" event={"ID":"458dc6de-2219-419b-ab95-76a72a77a097","Type":"ContainerStarted","Data":"5397ee262db527192496a7043e742198a87103b26200a0ab76541c9447f30f3b"} Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.077880 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podUID="458dc6de-2219-419b-ab95-76a72a77a097" Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.080196 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" event={"ID":"ac1f15b1-9c34-49e4-957a-74a950b6583f","Type":"ContainerStarted","Data":"47f4b76b019dba0ddaefc5623b046037270f3a96c4814de6f3c2376f0ac299ff"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.091720 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" event={"ID":"4d010ccc-9dc9-4d66-9544-354bc82380ca","Type":"ContainerStarted","Data":"ac5b4202ea666d5e4d1387dde678e97cc2d208dba02a8d1fc3f150325f92fc2b"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.106120 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" event={"ID":"e8086139-0fe1-4859-8e4c-94eea0dd6a18","Type":"ContainerStarted","Data":"2c00d4e61267f27d4d3a7bf2cf619ebbd5eb5d846d848e7f61ee008da4fa2f76"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.109022 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" event={"ID":"dded3929-2919-4903-8465-da99004a3cd6","Type":"ContainerStarted","Data":"65131771822a117027d8a8cacb05a5c9460bf460f41e390ddef954eaa737f07a"} Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.554279 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.554571 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.554620 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:47:52.55460636 +0000 UTC m=+1018.528955405 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.675264 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:48 crc kubenswrapper[4834]: I0121 14:47:48.675715 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.675902 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.675983 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:52.675964275 +0000 UTC m=+1018.650313320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.676117 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:47:48 crc kubenswrapper[4834]: E0121 14:47:48.676212 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:47:52.676189892 +0000 UTC m=+1018.650539017 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.123656 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podUID="aa317ccb-4317-40f7-8661-20a62f36dd97" Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.123743 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podUID="458dc6de-2219-419b-ab95-76a72a77a097" Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.123817 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podUID="f3592517-3af0-41fd-bd16-da41fb583656" Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.123885 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podUID="bca7db69-2696-4e85-96dc-7c9140549f9a" Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.123941 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podUID="941ce977-5cf7-49e2-b96d-8446fefa95cd" Jan 21 14:47:49 crc kubenswrapper[4834]: E0121 14:47:49.137863 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" podUID="69493e6c-ad47-4a9e-aad4-b93cee5d4ac7" Jan 21 14:47:51 crc kubenswrapper[4834]: I0121 14:47:51.875136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " 
pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:51 crc kubenswrapper[4834]: E0121 14:47:51.875327 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:51 crc kubenswrapper[4834]: E0121 14:47:51.875417 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:47:59.875396848 +0000 UTC m=+1025.849745893 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: I0121 14:47:52.582856 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.583074 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.583159 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:48:00.583139621 +0000 UTC m=+1026.557488666 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: I0121 14:47:52.715365 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:52 crc kubenswrapper[4834]: I0121 14:47:52.715436 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.715488 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.715558 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:48:00.71553865 +0000 UTC m=+1026.689887705 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.715603 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:47:52 crc kubenswrapper[4834]: E0121 14:47:52.715700 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:48:00.715681125 +0000 UTC m=+1026.690030170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:47:59 crc kubenswrapper[4834]: I0121 14:47:59.960964 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:47:59 crc kubenswrapper[4834]: E0121 14:47:59.961798 4834 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 14:47:59 crc kubenswrapper[4834]: E0121 14:47:59.962017 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert podName:8b4d1ded-fcb3-456e-9704-776079ec120f nodeName:}" failed. No retries permitted until 2026-01-21 14:48:15.962002292 +0000 UTC m=+1041.936351337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert") pod "infra-operator-controller-manager-77c48c7859-bwn92" (UID: "8b4d1ded-fcb3-456e-9704-776079ec120f") : secret "infra-operator-webhook-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: I0121 14:48:00.663824 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.664027 4834 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.664430 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert podName:2097f21e-3f3a-435e-9003-15846c98efbd nodeName:}" failed. No retries permitted until 2026-01-21 14:48:16.664402638 +0000 UTC m=+1042.638751683 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert") pod "openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" (UID: "2097f21e-3f3a-435e-9003-15846c98efbd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: I0121 14:48:00.765058 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:00 crc kubenswrapper[4834]: I0121 14:48:00.765120 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.765322 4834 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.765373 4834 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.765449 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:48:16.765429129 +0000 UTC m=+1042.739778174 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "webhook-server-cert" not found Jan 21 14:48:00 crc kubenswrapper[4834]: E0121 14:48:00.765471 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs podName:ce4027ee-3ff6-4f48-8eee-cac190eac5f9 nodeName:}" failed. No retries permitted until 2026-01-21 14:48:16.7654623 +0000 UTC m=+1042.739811345 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs") pod "openstack-operator-controller-manager-75bfd788c8-d4x4p" (UID: "ce4027ee-3ff6-4f48-8eee-cac190eac5f9") : secret "metrics-server-cert" not found Jan 21 14:48:01 crc kubenswrapper[4834]: E0121 14:48:01.401584 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 14:48:01 crc kubenswrapper[4834]: E0121 14:48:01.401805 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fpnjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-v7r86_openstack-operators(ac1f15b1-9c34-49e4-957a-74a950b6583f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:01 crc kubenswrapper[4834]: E0121 14:48:01.403091 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" 
podUID="ac1f15b1-9c34-49e4-957a-74a950b6583f" Jan 21 14:48:02 crc kubenswrapper[4834]: E0121 14:48:02.363912 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" podUID="ac1f15b1-9c34-49e4-957a-74a950b6583f" Jan 21 14:48:02 crc kubenswrapper[4834]: E0121 14:48:02.602368 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 14:48:02 crc kubenswrapper[4834]: E0121 14:48:02.602563 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2jxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-d5jml_openstack-operators(4d010ccc-9dc9-4d66-9544-354bc82380ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:02 crc kubenswrapper[4834]: E0121 14:48:02.603904 4834 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" podUID="4d010ccc-9dc9-4d66-9544-354bc82380ca" Jan 21 14:48:03 crc kubenswrapper[4834]: E0121 14:48:03.372040 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" podUID="4d010ccc-9dc9-4d66-9544-354bc82380ca" Jan 21 14:48:03 crc kubenswrapper[4834]: E0121 14:48:03.741845 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 14:48:03 crc kubenswrapper[4834]: E0121 14:48:03.742111 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hswg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-dfmb5_openstack-operators(2bf8d583-f505-436a-a60f-ec418f4d5e94): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:03 crc kubenswrapper[4834]: E0121 14:48:03.743841 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" podUID="2bf8d583-f505-436a-a60f-ec418f4d5e94" Jan 21 14:48:04 crc kubenswrapper[4834]: E0121 14:48:04.377426 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" podUID="2bf8d583-f505-436a-a60f-ec418f4d5e94" Jan 21 14:48:05 crc kubenswrapper[4834]: E0121 14:48:05.906693 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 14:48:05 crc kubenswrapper[4834]: E0121 14:48:05.906885 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kmxbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-nsvnj_openstack-operators(d2a8e7a2-c785-48e3-a143-30c89c49fe36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:05 crc kubenswrapper[4834]: E0121 14:48:05.908134 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" podUID="d2a8e7a2-c785-48e3-a143-30c89c49fe36" Jan 21 14:48:06 crc kubenswrapper[4834]: E0121 14:48:06.402713 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" podUID="d2a8e7a2-c785-48e3-a143-30c89c49fe36" Jan 21 14:48:07 crc kubenswrapper[4834]: E0121 14:48:07.812199 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 14:48:07 crc kubenswrapper[4834]: E0121 14:48:07.812760 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5fz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-2dj64_openstack-operators(ae8ff7f9-4d1b-4562-a307-f9ad95966c48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:07 crc kubenswrapper[4834]: E0121 14:48:07.814117 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" podUID="ae8ff7f9-4d1b-4562-a307-f9ad95966c48" Jan 21 14:48:08 crc kubenswrapper[4834]: E0121 14:48:08.586519 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" podUID="ae8ff7f9-4d1b-4562-a307-f9ad95966c48" Jan 21 14:48:09 crc kubenswrapper[4834]: E0121 14:48:09.034396 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 14:48:09 crc kubenswrapper[4834]: E0121 14:48:09.034600 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xp9x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-jvfvr_openstack-operators(c181377f-ebc2-4ebf-ba81-71d6609d37c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:09 crc kubenswrapper[4834]: E0121 14:48:09.035797 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" podUID="c181377f-ebc2-4ebf-ba81-71d6609d37c4" Jan 21 14:48:09 crc kubenswrapper[4834]: E0121 14:48:09.600743 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" podUID="c181377f-ebc2-4ebf-ba81-71d6609d37c4" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.018996 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.019620 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wf8hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-28wfl_openstack-operators(e8086139-0fe1-4859-8e4c-94eea0dd6a18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.021285 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" podUID="e8086139-0fe1-4859-8e4c-94eea0dd6a18" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.486211 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.486721 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bb644,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-7ww2d_openstack-operators(8728d4cf-ef2f-4e71-875f-6227ac7117db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.487977 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" podUID="8728d4cf-ef2f-4e71-875f-6227ac7117db" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.626789 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" podUID="e8086139-0fe1-4859-8e4c-94eea0dd6a18" Jan 21 14:48:13 crc kubenswrapper[4834]: E0121 14:48:13.626894 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" 
podUID="8728d4cf-ef2f-4e71-875f-6227ac7117db" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.039343 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.047137 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4d1ded-fcb3-456e-9704-776079ec120f-cert\") pod \"infra-operator-controller-manager-77c48c7859-bwn92\" (UID: \"8b4d1ded-fcb3-456e-9704-776079ec120f\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.235614 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nfz46" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.244103 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.750661 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.754603 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2097f21e-3f3a-435e-9003-15846c98efbd-cert\") pod \"openstack-baremetal-operator-controller-manager-5b9875986ddp2nz\" (UID: \"2097f21e-3f3a-435e-9003-15846c98efbd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.851774 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.851822 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:16 crc kubenswrapper[4834]: I0121 14:48:16.855248 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-webhook-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:16 crc kubenswrapper[4834]: 
I0121 14:48:16.856007 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce4027ee-3ff6-4f48-8eee-cac190eac5f9-metrics-certs\") pod \"openstack-operator-controller-manager-75bfd788c8-d4x4p\" (UID: \"ce4027ee-3ff6-4f48-8eee-cac190eac5f9\") " pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:17 crc kubenswrapper[4834]: I0121 14:48:17.028270 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zzljs" Jan 21 14:48:17 crc kubenswrapper[4834]: I0121 14:48:17.036588 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:17 crc kubenswrapper[4834]: I0121 14:48:17.095754 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-86rdz" Jan 21 14:48:17 crc kubenswrapper[4834]: I0121 14:48:17.103587 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:22 crc kubenswrapper[4834]: E0121 14:48:22.571822 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 14:48:22 crc kubenswrapper[4834]: E0121 14:48:22.572607 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5lt4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-qmndz_openstack-operators(f3592517-3af0-41fd-bd16-da41fb583656): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:22 crc kubenswrapper[4834]: E0121 14:48:22.573715 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podUID="f3592517-3af0-41fd-bd16-da41fb583656" Jan 21 14:48:25 crc kubenswrapper[4834]: E0121 14:48:25.781722 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 14:48:25 crc kubenswrapper[4834]: E0121 14:48:25.784125 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lclw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-hlcxr_openstack-operators(458dc6de-2219-419b-ab95-76a72a77a097): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:25 crc kubenswrapper[4834]: E0121 14:48:25.785352 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podUID="458dc6de-2219-419b-ab95-76a72a77a097" Jan 21 14:48:28 crc kubenswrapper[4834]: E0121 14:48:28.917634 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 14:48:28 crc kubenswrapper[4834]: E0121 14:48:28.918072 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvkbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-44bkf_openstack-operators(941ce977-5cf7-49e2-b96d-8446fefa95cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:28 crc kubenswrapper[4834]: E0121 14:48:28.919257 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podUID="941ce977-5cf7-49e2-b96d-8446fefa95cd" Jan 21 14:48:29 crc kubenswrapper[4834]: E0121 14:48:29.978224 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 14:48:29 crc kubenswrapper[4834]: E0121 14:48:29.978810 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75hhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-lczqr_openstack-operators(aa317ccb-4317-40f7-8661-20a62f36dd97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:29 crc kubenswrapper[4834]: E0121 14:48:29.979997 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podUID="aa317ccb-4317-40f7-8661-20a62f36dd97" Jan 21 14:48:30 crc kubenswrapper[4834]: E0121 14:48:30.419435 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 14:48:30 crc kubenswrapper[4834]: E0121 14:48:30.420179 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvpjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wnz6s_openstack-operators(bca7db69-2696-4e85-96dc-7c9140549f9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:48:30 crc kubenswrapper[4834]: E0121 14:48:30.421338 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podUID="bca7db69-2696-4e85-96dc-7c9140549f9a" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.110544 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92"] Jan 21 14:48:31 crc kubenswrapper[4834]: W0121 14:48:31.139769 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4d1ded_fcb3_456e_9704_776079ec120f.slice/crio-45d63393b1b87c45f5b25a7a3492f122bb93c245f61e034b5af60092c11075b1 WatchSource:0}: Error finding container 45d63393b1b87c45f5b25a7a3492f122bb93c245f61e034b5af60092c11075b1: Status 404 returned error can't find the container with id 45d63393b1b87c45f5b25a7a3492f122bb93c245f61e034b5af60092c11075b1 Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.467471 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p"] Jan 21 14:48:31 crc kubenswrapper[4834]: W0121 14:48:31.732498 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce4027ee_3ff6_4f48_8eee_cac190eac5f9.slice/crio-f5b44f06dc811f48a23fdec203c22a50ba4586f1986650da8f5d8f8f3144d4ed WatchSource:0}: Error finding container f5b44f06dc811f48a23fdec203c22a50ba4586f1986650da8f5d8f8f3144d4ed: Status 404 returned error can't find the container with id f5b44f06dc811f48a23fdec203c22a50ba4586f1986650da8f5d8f8f3144d4ed Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.767747 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" event={"ID":"2bf8d583-f505-436a-a60f-ec418f4d5e94","Type":"ContainerStarted","Data":"5023ea32d3fa713b7972c1c1745b6b63cac129f5f32ac60a95863b49f1c70628"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.768066 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:48:31 crc 
kubenswrapper[4834]: I0121 14:48:31.782815 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz"] Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.785404 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" event={"ID":"ff5898ae-a2fa-4ed4-9e56-74a5476ca185","Type":"ContainerStarted","Data":"d805ee1d136c6557e32f0f1e5696748e31fd2eb0449bf84d2d94578964bcd25d"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.785763 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.789500 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" event={"ID":"8b4d1ded-fcb3-456e-9704-776079ec120f","Type":"ContainerStarted","Data":"45d63393b1b87c45f5b25a7a3492f122bb93c245f61e034b5af60092c11075b1"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.812033 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" podStartSLOduration=5.312887348 podStartE2EDuration="48.812013722s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.25858552 +0000 UTC m=+1013.232934565" lastFinishedPulling="2026-01-21 14:48:30.757711894 +0000 UTC m=+1056.732060939" observedRunningTime="2026-01-21 14:48:31.80712216 +0000 UTC m=+1057.781471205" watchObservedRunningTime="2026-01-21 14:48:31.812013722 +0000 UTC m=+1057.786362767" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.847807 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" event={"ID":"2e0a31fa-5416-4a94-a915-69f0561794bc","Type":"ContainerStarted","Data":"2fcdff4cdc66fd482d303df931543348e56c87a739e328f7f37c2af250186c18"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.848611 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.876667 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" event={"ID":"d2a8e7a2-c785-48e3-a143-30c89c49fe36","Type":"ContainerStarted","Data":"c4a48a92483880996b4aa106d3d1d597040f50a2ba4b8d1ac87631144cd0f158"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.877615 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.888490 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" event={"ID":"dded3929-2919-4903-8465-da99004a3cd6","Type":"ContainerStarted","Data":"62da270f243de1a99274c5baa20fb93172769815df2732445c5f2156b9019c1b"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.889484 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.897394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" event={"ID":"ae8ff7f9-4d1b-4562-a307-f9ad95966c48","Type":"ContainerStarted","Data":"7e3fbc2496ca89dc59ee40545b5ea16717aa7169e97b61482122a865cc9d9b0d"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.898228 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.903450 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" event={"ID":"98485c62-7908-4ca9-8436-71dd52f371df","Type":"ContainerStarted","Data":"395aa8e79b339045187646d999f9dd9dfe0be0ac40cc7ef9c2980fe0b4c0dd6a"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.904204 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.906061 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" event={"ID":"ac1f15b1-9c34-49e4-957a-74a950b6583f","Type":"ContainerStarted","Data":"036d913c72cb235a197c258f5115a200a9b91c5ea2478e3e53610415f92291ce"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.906519 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.907912 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" event={"ID":"69493e6c-ad47-4a9e-aad4-b93cee5d4ac7","Type":"ContainerStarted","Data":"5d281308778e4a8ec293dea014778aa06f0b8e6a89b24c0c0a68761e9c938826"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.908322 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.909947 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" event={"ID":"ce4027ee-3ff6-4f48-8eee-cac190eac5f9","Type":"ContainerStarted","Data":"f5b44f06dc811f48a23fdec203c22a50ba4586f1986650da8f5d8f8f3144d4ed"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.911163 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" event={"ID":"5c2b9737-fefd-4db7-a60d-5343a8b44554","Type":"ContainerStarted","Data":"7c673fdb30bb77a1071a887fabb471ba03652ff48536a309f02ba8054d859974"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.911725 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.912896 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" event={"ID":"c181377f-ebc2-4ebf-ba81-71d6609d37c4","Type":"ContainerStarted","Data":"6ba53280edc5c0efbd54644d347e116628c33a40db8c1162a6064fb181406b7b"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.913317 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.914866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" event={"ID":"00cc8ba5-f3f5-42e9-a23a-6c3b1989763b","Type":"ContainerStarted","Data":"6a5ca222cb4fc1165b451ed1dc71f7e220b53dd89e2f73f095bdc90e3fca1da4"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.915311 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.917008 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" event={"ID":"4d010ccc-9dc9-4d66-9544-354bc82380ca","Type":"ContainerStarted","Data":"9b69d26ca24b2f4ed1d1173b2ae2e7f4c12803ad5b6537b513216cf7b2e58594"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.917443 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.921231 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" event={"ID":"8728d4cf-ef2f-4e71-875f-6227ac7117db","Type":"ContainerStarted","Data":"ee1076e167b6a77b2513668bd2513ad0a07e51e8204fdaa8961c5776ab97376e"} Jan 21 14:48:31 crc kubenswrapper[4834]: I0121 14:48:31.922105 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.043883 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" podStartSLOduration=6.594580797 podStartE2EDuration="48.043847609s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.459351432 +0000 UTC m=+1013.433700477" lastFinishedPulling="2026-01-21 14:48:28.908618244 +0000 UTC m=+1054.882967289" observedRunningTime="2026-01-21 14:48:32.025535679 +0000 UTC m=+1057.999884724" watchObservedRunningTime="2026-01-21 14:48:32.043847609 +0000 UTC m=+1058.018196654" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.551859 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" podStartSLOduration=7.278710094 podStartE2EDuration="48.551832702s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.635341081 +0000 UTC m=+1013.609690126" lastFinishedPulling="2026-01-21 14:48:28.908463689 +0000 UTC m=+1054.882812734" observedRunningTime="2026-01-21 14:48:32.480692867 +0000 UTC m=+1058.455041912" watchObservedRunningTime="2026-01-21 14:48:32.551832702 +0000 UTC m=+1058.526181747" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.590404 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" podStartSLOduration=6.124060728 podStartE2EDuration="49.590385042s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.233161517 +0000 UTC m=+1013.207510552" lastFinishedPulling="2026-01-21 14:48:30.699485811 +0000 UTC 
m=+1056.673834866" observedRunningTime="2026-01-21 14:48:32.577675686 +0000 UTC m=+1058.552024741" watchObservedRunningTime="2026-01-21 14:48:32.590385042 +0000 UTC m=+1058.564734077" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.646986 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" podStartSLOduration=5.338771464 podStartE2EDuration="48.646959383s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.446062017 +0000 UTC m=+1013.420411062" lastFinishedPulling="2026-01-21 14:48:30.754249936 +0000 UTC m=+1056.728598981" observedRunningTime="2026-01-21 14:48:32.643440683 +0000 UTC m=+1058.617789728" watchObservedRunningTime="2026-01-21 14:48:32.646959383 +0000 UTC m=+1058.621308438" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.659391 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" podStartSLOduration=5.874530234 podStartE2EDuration="48.659359658s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.675635897 +0000 UTC m=+1013.649984942" lastFinishedPulling="2026-01-21 14:48:30.460465321 +0000 UTC m=+1056.434814366" observedRunningTime="2026-01-21 14:48:32.612643074 +0000 UTC m=+1058.586992119" watchObservedRunningTime="2026-01-21 14:48:32.659359658 +0000 UTC m=+1058.633708713" Jan 21 14:48:32 crc kubenswrapper[4834]: I0121 14:48:32.820723 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" podStartSLOduration=6.325335734 podStartE2EDuration="49.820700371s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.246363849 +0000 UTC m=+1013.220712894" lastFinishedPulling="2026-01-21 14:48:30.741728486 +0000 UTC m=+1056.716077531" observedRunningTime="2026-01-21 14:48:32.710061177 +0000 UTC m=+1058.684410232" watchObservedRunningTime="2026-01-21 14:48:32.820700371 +0000 UTC m=+1058.795049416" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:32.982025 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" podStartSLOduration=20.915653571 podStartE2EDuration="49.981984602s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:45.590365953 +0000 UTC m=+1011.564714998" lastFinishedPulling="2026-01-21 14:48:14.656696964 +0000 UTC m=+1040.631046029" observedRunningTime="2026-01-21 14:48:32.82964076 +0000 UTC m=+1058.803989805" watchObservedRunningTime="2026-01-21 14:48:32.981984602 +0000 UTC m=+1058.956333647" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:32.991813 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" podStartSLOduration=21.781222002 podStartE2EDuration="48.991775907s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.446323425 +0000 UTC m=+1013.420672470" lastFinishedPulling="2026-01-21 14:48:14.65687733 +0000 UTC m=+1040.631226375" observedRunningTime="2026-01-21 14:48:32.991772587 +0000 UTC m=+1058.966121652" watchObservedRunningTime="2026-01-21 14:48:32.991775907 +0000 UTC m=+1058.966124952" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.014583 4834 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" event={"ID":"2097f21e-3f3a-435e-9003-15846c98efbd","Type":"ContainerStarted","Data":"e00c2eb5f36ccd1ad76e9985cdbf78b41b94751441a180a4f027a48a23e4255e"} Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.035771 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" event={"ID":"e8086139-0fe1-4859-8e4c-94eea0dd6a18","Type":"ContainerStarted","Data":"5a8cb70ba73eb4dfa7ea1b0253933a55af6c52a3e7db142e2d87490706acca5d"} Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.036739 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.074120 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" event={"ID":"ce4027ee-3ff6-4f48-8eee-cac190eac5f9","Type":"ContainerStarted","Data":"3046637b3ae6f8a64d9c828ae6ac98aeecc7be6a75f34fb75bf1989e62a29d28"} Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.074183 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.447534 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" podStartSLOduration=23.049472361 podStartE2EDuration="50.447510493s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.259020394 +0000 UTC m=+1013.233369449" lastFinishedPulling="2026-01-21 14:48:14.657058536 +0000 UTC m=+1040.631407581" observedRunningTime="2026-01-21 14:48:33.328297171 +0000 UTC m=+1059.302646216" watchObservedRunningTime="2026-01-21 14:48:33.447510493 +0000 UTC m=+1059.421859538" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.454086 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" podStartSLOduration=7.998724964 podStartE2EDuration="49.454064786s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.453147968 +0000 UTC m=+1013.427497013" lastFinishedPulling="2026-01-21 14:48:28.90848775 +0000 UTC m=+1054.882836835" observedRunningTime="2026-01-21 14:48:33.390483297 +0000 UTC m=+1059.364832342" watchObservedRunningTime="2026-01-21 14:48:33.454064786 +0000 UTC m=+1059.428413831" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.460847 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" podStartSLOduration=7.238634677 podStartE2EDuration="50.460826617s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.478939713 +0000 UTC m=+1013.453288758" lastFinishedPulling="2026-01-21 14:48:30.701131653 +0000 UTC m=+1056.675480698" observedRunningTime="2026-01-21 14:48:33.428010585 +0000 UTC m=+1059.402359640" watchObservedRunningTime="2026-01-21 14:48:33.460826617 +0000 UTC m=+1059.435175662" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.465554 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" podStartSLOduration=6.276284993 podStartE2EDuration="50.465541843s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:47:46.511128169 +0000 UTC m=+1012.485477214" lastFinishedPulling="2026-01-21 14:48:30.700385019 +0000 UTC m=+1056.674734064" observedRunningTime="2026-01-21 14:48:33.443437835 +0000 UTC m=+1059.417786880" watchObservedRunningTime="2026-01-21 14:48:33.465541843 +0000 UTC m=+1059.439890888" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.546992 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" podStartSLOduration=6.092961879 podStartE2EDuration="49.546961338s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.267399075 +0000 UTC m=+1013.241748120" lastFinishedPulling="2026-01-21 14:48:30.721398524 +0000 UTC m=+1056.695747579" observedRunningTime="2026-01-21 14:48:33.543736977 +0000 UTC m=+1059.518086022" watchObservedRunningTime="2026-01-21 14:48:33.546961338 +0000 UTC m=+1059.521310383" Jan 21 14:48:33 crc kubenswrapper[4834]: I0121 14:48:33.702665 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" podStartSLOduration=6.448165787 podStartE2EDuration="49.702632444s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.446128999 +0000 UTC m=+1013.420478044" lastFinishedPulling="2026-01-21 14:48:30.700595646 +0000 UTC m=+1056.674944701" observedRunningTime="2026-01-21 14:48:33.698487805 +0000 UTC m=+1059.672836870" watchObservedRunningTime="2026-01-21 14:48:33.702632444 +0000 UTC m=+1059.676981489" Jan 21 14:48:34 crc kubenswrapper[4834]: I0121 14:48:34.387193 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" podStartSLOduration=50.387171122 podStartE2EDuration="50.387171122s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:48:33.841582809 +0000 UTC m=+1059.815931854" watchObservedRunningTime="2026-01-21 14:48:34.387171122 +0000 UTC m=+1060.361520167" Jan 21 14:48:37 crc kubenswrapper[4834]: I0121 14:48:37.123635 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75bfd788c8-d4x4p" Jan 21 14:48:37 crc kubenswrapper[4834]: E0121 14:48:37.335858 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podUID="f3592517-3af0-41fd-bd16-da41fb583656" Jan 21 14:48:40 crc kubenswrapper[4834]: E0121 14:48:40.327244 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podUID="458dc6de-2219-419b-ab95-76a72a77a097" Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.332941 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" event={"ID":"2097f21e-3f3a-435e-9003-15846c98efbd","Type":"ContainerStarted","Data":"9c88c2c73cf00d8b3c7ee8f1cc4060a4d6e8433afe99cd82917b38fcc88dcd12"} Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.333147 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.337724 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" event={"ID":"8b4d1ded-fcb3-456e-9704-776079ec120f","Type":"ContainerStarted","Data":"8f468b9f318bd55f28668477d856322f2db49c512c47c977109f3d250b6f7bc8"} Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.337879 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.426487 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" podStartSLOduration=49.035050254 podStartE2EDuration="57.426463264s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:48:31.799333038 +0000 UTC m=+1057.773682083" lastFinishedPulling="2026-01-21 14:48:40.190746058 +0000 UTC m=+1066.165095093" observedRunningTime="2026-01-21 14:48:41.410979781 +0000 UTC m=+1067.385328836" watchObservedRunningTime="2026-01-21 14:48:41.426463264 +0000 UTC m=+1067.400812309" Jan 21 14:48:41 crc kubenswrapper[4834]: I0121 14:48:41.440143 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" podStartSLOduration=49.407299773 podStartE2EDuration="58.440109468s" podCreationTimestamp="2026-01-21 14:47:43 +0000 UTC" firstStartedPulling="2026-01-21 14:48:31.148689074 +0000 UTC m=+1057.123038119" lastFinishedPulling="2026-01-21 14:48:40.181498769 +0000 UTC m=+1066.155847814" observedRunningTime="2026-01-21 14:48:41.435119623 +0000 UTC m=+1067.409468658" watchObservedRunningTime="2026-01-21 14:48:41.440109468 +0000 UTC m=+1067.414458523" Jan 21 14:48:43 crc kubenswrapper[4834]: E0121 14:48:43.326541 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podUID="aa317ccb-4317-40f7-8661-20a62f36dd97" Jan 21 14:48:43 crc kubenswrapper[4834]: E0121 14:48:43.326553 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podUID="941ce977-5cf7-49e2-b96d-8446fefa95cd" Jan 21 
14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.151714 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-v7r86" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.164422 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qdt7x" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.282162 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dfmb5" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.292508 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-x56p7" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.316804 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-2dj64" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.442274 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-jvfvr" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.632436 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-d5jml" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.681349 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-7ww2d" Jan 21 14:48:44 crc kubenswrapper[4834]: I0121 14:48:44.802322 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-whz74" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.026581 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cv9xt" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.029456 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-28wfl" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.030109 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-bdrss" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.039048 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-lt695" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.054434 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-nsvnj" Jan 21 14:48:45 crc kubenswrapper[4834]: I0121 14:48:45.190958 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-92r4m" Jan 21 14:48:45 crc kubenswrapper[4834]: E0121 14:48:45.326501 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podUID="bca7db69-2696-4e85-96dc-7c9140549f9a" Jan 21 14:48:46 crc kubenswrapper[4834]: I0121 14:48:46.253665 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-bwn92" Jan 21 14:48:47 crc kubenswrapper[4834]: I0121 14:48:47.045350 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5b9875986ddp2nz" Jan 21 14:48:47 crc kubenswrapper[4834]: I0121 14:48:47.114176 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:48:47 crc kubenswrapper[4834]: I0121 14:48:47.114256 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:48:51 crc kubenswrapper[4834]: I0121 14:48:51.420689 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" event={"ID":"f3592517-3af0-41fd-bd16-da41fb583656","Type":"ContainerStarted","Data":"356287b8c1422517b1376e8e3e633f3de8d18d7ed8ae9c70c930bfd26390771e"} Jan 21 14:48:51 crc kubenswrapper[4834]: I0121 14:48:51.422413 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:48:51 crc kubenswrapper[4834]: I0121 14:48:51.449144 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" podStartSLOduration=4.090745936 podStartE2EDuration="1m7.449114753s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.625794332 +0000 UTC m=+1013.600143377" lastFinishedPulling="2026-01-21 14:48:50.984163149 +0000 UTC m=+1076.958512194" observedRunningTime="2026-01-21 14:48:51.437213472 +0000 UTC m=+1077.411562527" watchObservedRunningTime="2026-01-21 14:48:51.449114753 +0000 UTC m=+1077.423463818" Jan 21 14:48:53 crc kubenswrapper[4834]: I0121 14:48:53.434740 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" event={"ID":"458dc6de-2219-419b-ab95-76a72a77a097","Type":"ContainerStarted","Data":"655c411a68384a5d904bd95e4a0f9e407d92c45a821c12b411e5b687da384132"} Jan 21 14:48:53 crc kubenswrapper[4834]: I0121 14:48:53.435377 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:48:53 crc kubenswrapper[4834]: I0121 14:48:53.452784 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" podStartSLOduration=4.327287989 podStartE2EDuration="1m9.452759472s" podCreationTimestamp="2026-01-21 
14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.650674258 +0000 UTC m=+1013.625023303" lastFinishedPulling="2026-01-21 14:48:52.776145741 +0000 UTC m=+1078.750494786" observedRunningTime="2026-01-21 14:48:53.451111192 +0000 UTC m=+1079.425460237" watchObservedRunningTime="2026-01-21 14:48:53.452759472 +0000 UTC m=+1079.427108517" Jan 21 14:48:55 crc kubenswrapper[4834]: I0121 14:48:55.513571 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" event={"ID":"941ce977-5cf7-49e2-b96d-8446fefa95cd","Type":"ContainerStarted","Data":"482f2d0fe28475399feac72d071d8fcb4f8a22532e3a1c8f3ad70340b99bd43e"} Jan 21 14:48:55 crc kubenswrapper[4834]: I0121 14:48:55.514431 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:48:55 crc kubenswrapper[4834]: I0121 14:48:55.531630 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" podStartSLOduration=4.366214101 podStartE2EDuration="1m11.531605433s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.650703869 +0000 UTC m=+1013.625052914" lastFinishedPulling="2026-01-21 14:48:54.816095201 +0000 UTC m=+1080.790444246" observedRunningTime="2026-01-21 14:48:55.528725553 +0000 UTC m=+1081.503074608" watchObservedRunningTime="2026-01-21 14:48:55.531605433 +0000 UTC m=+1081.505954478" Jan 21 14:49:00 crc kubenswrapper[4834]: I0121 14:49:00.589074 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" event={"ID":"aa317ccb-4317-40f7-8661-20a62f36dd97","Type":"ContainerStarted","Data":"64aa8d443ce0c733f822f70eb9d3a14bbcf527001f868e43713233a8a448ba55"} Jan 21 14:49:00 crc kubenswrapper[4834]: I0121 14:49:00.611455 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" podStartSLOduration=4.531933282 podStartE2EDuration="1m16.611428771s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.644161386 +0000 UTC m=+1013.618510431" lastFinishedPulling="2026-01-21 14:48:59.723656875 +0000 UTC m=+1085.698005920" observedRunningTime="2026-01-21 14:49:00.610222553 +0000 UTC m=+1086.584571618" watchObservedRunningTime="2026-01-21 14:49:00.611428771 +0000 UTC m=+1086.585777816" Jan 21 14:49:01 crc kubenswrapper[4834]: I0121 14:49:01.641279 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" event={"ID":"bca7db69-2696-4e85-96dc-7c9140549f9a","Type":"ContainerStarted","Data":"a894f307dab059f9988954c5a058fea6cc1aabf001d321afeea65101a7a46543"} Jan 21 14:49:01 crc kubenswrapper[4834]: I0121 14:49:01.665394 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wnz6s" podStartSLOduration=4.487257418 podStartE2EDuration="1m17.665368418s" podCreationTimestamp="2026-01-21 14:47:44 +0000 UTC" firstStartedPulling="2026-01-21 14:47:47.617983489 +0000 UTC m=+1013.592332534" lastFinishedPulling="2026-01-21 14:49:00.796094489 +0000 UTC m=+1086.770443534" observedRunningTime="2026-01-21 14:49:01.661179118 +0000 UTC m=+1087.635528173" watchObservedRunningTime="2026-01-21 
14:49:01.665368418 +0000 UTC m=+1087.639717463" Jan 21 14:49:04 crc kubenswrapper[4834]: I0121 14:49:04.910698 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:49:04 crc kubenswrapper[4834]: I0121 14:49:04.915159 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-lczqr" Jan 21 14:49:04 crc kubenswrapper[4834]: I0121 14:49:04.962953 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-qmndz" Jan 21 14:49:05 crc kubenswrapper[4834]: I0121 14:49:05.155291 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-hlcxr" Jan 21 14:49:05 crc kubenswrapper[4834]: I0121 14:49:05.286642 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-44bkf" Jan 21 14:49:17 crc kubenswrapper[4834]: I0121 14:49:17.114338 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:49:17 crc kubenswrapper[4834]: I0121 14:49:17.114979 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.809419 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"] Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.811484 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.814037 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.814282 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.814543 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.815280 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qxkrm" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.829764 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"] Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.877795 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"] Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.883700 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.887351 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.890490 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"] Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.962946 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlvw\" (UniqueName: \"kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:49:21 crc kubenswrapper[4834]: I0121 14:49:21.963044 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.064261 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.064338 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.064384 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.064412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjb8\" (UniqueName: \"kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.064481 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlvw\" (UniqueName: \"kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.065698 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" 
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.089125 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlvw\" (UniqueName: \"kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw\") pod \"dnsmasq-dns-84bb9d8bd9-dzsgs\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.131804 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.165540 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.166028 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.166092 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjb8\" (UniqueName: \"kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.167210 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.167221 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.186827 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjb8\" (UniqueName: \"kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8\") pod \"dnsmasq-dns-5f854695bc-vtw9v\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.200088 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v"
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.765577 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"]
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.820022 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" event={"ID":"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0","Type":"ContainerStarted","Data":"18f5f05a2f61202bca9adc8e77f8d799e6319e9f0602d6300c72a61079428550"}
Jan 21 14:49:22 crc kubenswrapper[4834]: W0121 14:49:22.845491 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ef6e99_69b9_4797_8dfd_1a3fc69b2299.slice/crio-6395964044b55e3865481f5c7a0d3f2fa94ba960f969d94eed7f742e6193457d WatchSource:0}: Error finding container 6395964044b55e3865481f5c7a0d3f2fa94ba960f969d94eed7f742e6193457d: Status 404 returned error can't find the container with id 6395964044b55e3865481f5c7a0d3f2fa94ba960f969d94eed7f742e6193457d
Jan 21 14:49:22 crc kubenswrapper[4834]: I0121 14:49:22.846781 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"]
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.788134 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"]
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.802092 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"]
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.805251 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.848131 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"]
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.860028 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" event={"ID":"41ef6e99-69b9-4797-8dfd-1a3fc69b2299","Type":"ContainerStarted","Data":"6395964044b55e3865481f5c7a0d3f2fa94ba960f969d94eed7f742e6193457d"}
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.908463 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvfl\" (UniqueName: \"kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.908582 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:23 crc kubenswrapper[4834]: I0121 14:49:23.908619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.009752 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqvfl\" (UniqueName: \"kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.009869 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.009899 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.010984 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.011178 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.035897 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqvfl\" (UniqueName: \"kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl\") pod \"dnsmasq-dns-744ffd65bc-2wwzm\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.161671 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.675057 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"]
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.714460 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"]
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.716739 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.723943 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"]
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.799000 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.799051 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbdm\" (UniqueName: \"kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.799138 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.901506 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.901580 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.901598 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbdm\" (UniqueName: \"kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.902901 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.903172 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.909744 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"]
Jan 21 14:49:24 crc kubenswrapper[4834]: I0121 14:49:24.925377 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbdm\" (UniqueName: \"kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm\") pod \"dnsmasq-dns-95f5f6995-v4rc7\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:24 crc kubenswrapper[4834]: W0121 14:49:24.957031 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d9deb4_e311_4ff4_87f8_f7c23742e4a5.slice/crio-e9d781ee0d656649785f36d619d39dacc27ddf106eac01eb3f6ad4548e10f5c6 WatchSource:0}: Error finding container e9d781ee0d656649785f36d619d39dacc27ddf106eac01eb3f6ad4548e10f5c6: Status 404 returned error can't find the container with id e9d781ee0d656649785f36d619d39dacc27ddf106eac01eb3f6ad4548e10f5c6
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.039566 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.041606 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.050388 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.050597 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.051519 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.053907 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.054393 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-vxrkr"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.054490 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.054569 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.054644 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.054684 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.205501 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.205863 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.205900 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscbv\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.205957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.205991 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206017 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206049 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206080 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206111 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206156 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.206192 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307350 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscbv\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307417 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307444 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307467 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307525 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307541 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307576 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307606 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307629 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.307649 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.309042 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.309116 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.309677 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.310039 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.311263 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.312005 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.317606 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.317793 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.319204 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.322477 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.327914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscbv\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.349988 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.371851 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.671615 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"]
Jan 21 14:49:25 crc kubenswrapper[4834]: W0121 14:49:25.712441 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cd141c_8b13_4ca7_babf_a9ad4db32357.slice/crio-4f7d4fb6bca0c6eb788e4ba5837ca914137657082efc51aa5e8daecd517b943f WatchSource:0}: Error finding container 4f7d4fb6bca0c6eb788e4ba5837ca914137657082efc51aa5e8daecd517b943f: Status 404 returned error can't find the container with id 4f7d4fb6bca0c6eb788e4ba5837ca914137657082efc51aa5e8daecd517b943f
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.870004 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.873225 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.877315 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.877454 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.877473 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.878087 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jhw8c"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.878319 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.878463 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.878571 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.882041 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.936918 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" event={"ID":"b6cd141c-8b13-4ca7-babf-a9ad4db32357","Type":"ContainerStarted","Data":"4f7d4fb6bca0c6eb788e4ba5837ca914137657082efc51aa5e8daecd517b943f"}
Jan 21 14:49:25 crc kubenswrapper[4834]: I0121 14:49:25.938999 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm" event={"ID":"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5","Type":"ContainerStarted","Data":"e9d781ee0d656649785f36d619d39dacc27ddf106eac01eb3f6ad4548e10f5c6"}
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.021919 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022225 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022334 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022388 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022579 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022622 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022707 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmhw\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022799 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.022983 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.023016 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.060294 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.124952 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125364 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125416 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125459 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125485 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125514 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125543 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125565 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125593 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125616 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmhw\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125663 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.125750 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.126035 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.126688 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.126843 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.127218 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.127233 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.130591 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.131257 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.132655 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.140566 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.147414 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmhw\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.153496 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.205083 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.884141 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.889894 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.903140 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.903774 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.904356 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.904748 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qdqqv"
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.905733 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 21 14:49:26 crc kubenswrapper[4834]: I0121 14:49:26.910836 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225610 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225638 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225677 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225707 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpj8l\" (UniqueName: \"kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.225738 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.226133 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.226288 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.254046 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerStarted","Data":"b7dcbb4926c68d815faaad4fb3c4b21e147ae2566d0155d55e83e153105eca2c"}
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.327630 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.328247 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.329608 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.329660 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpj8l\" (UniqueName: \"kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.329692 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.329745 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.329805 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0"
Jan 21 14:49:27 crc
kubenswrapper[4834]: I0121 14:49:27.329853 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.330919 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.328399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.331292 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.334394 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.334489 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.350991 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.356152 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.364008 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpj8l\" (UniqueName: \"kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l\") pod \"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.366011 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"openstack-galera-0\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") " pod="openstack/openstack-galera-0" Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.372518 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:49:27 crc kubenswrapper[4834]: I0121 14:49:27.545436 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.268752 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerStarted","Data":"61069d554ec42a69dcd479133fe906a0e908eeae4c7bb9655bfadcc270ae69ac"} Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.389549 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.391212 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.396942 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hnj47" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.397575 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.397234 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.397301 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.409606 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530108 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530169 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc 
kubenswrapper[4834]: I0121 14:49:28.530210 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvtx\" (UniqueName: \"kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530231 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530259 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.530279 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.602569 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.603644 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.612319 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.612606 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hjqnt" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.612727 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.658095 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669458 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669571 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669678 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669713 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669833 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvtx\" (UniqueName: \"kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669884 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.669991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.674213 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.674920 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.675205 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.676124 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.679887 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.687197 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.693642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.708122 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvtx\" (UniqueName: \"kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.725855 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.743134 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:28 crc kubenswrapper[4834]: W0121 14:49:28.756722 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8ae24ac_0d2e_4d6d_9417_9f3f7f8081ee.slice/crio-9d324abad33ce58ea0c970d989daf4ba3bed4727ecba0770e465e2101e1f5904 WatchSource:0}: Error finding container 9d324abad33ce58ea0c970d989daf4ba3bed4727ecba0770e465e2101e1f5904: Status 404 returned error can't find the container with id 9d324abad33ce58ea0c970d989daf4ba3bed4727ecba0770e465e2101e1f5904 Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.786549 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.786636 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.786671 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.786715 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9cnj\" (UniqueName: \"kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.786775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.891084 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.891180 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.891223 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.891265 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9cnj\" (UniqueName: \"kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.891365 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.892340 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.896378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.971971 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.984915 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9cnj\" (UniqueName: \"kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:28 crc kubenswrapper[4834]: I0121 14:49:28.992087 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle\") pod \"memcached-0\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " pod="openstack/memcached-0" Jan 21 14:49:29 crc kubenswrapper[4834]: I0121 14:49:29.039347 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:49:29 crc kubenswrapper[4834]: I0121 14:49:29.339009 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 14:49:29 crc kubenswrapper[4834]: I0121 14:49:29.380048 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerStarted","Data":"9d324abad33ce58ea0c970d989daf4ba3bed4727ecba0770e465e2101e1f5904"} Jan 21 14:49:30 crc kubenswrapper[4834]: I0121 14:49:30.004211 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:49:30 crc kubenswrapper[4834]: I0121 14:49:30.447428 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 14:49:30 crc kubenswrapper[4834]: I0121 14:49:30.457260 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerStarted","Data":"4d11fa71a06c241757e6a30a100815e378367ba7c41bcd344fba0cc251dfe7fc"} Jan 21 14:49:30 crc kubenswrapper[4834]: W0121 14:49:30.500433 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod234831ee_247b_40ae_9c71_db9d7b45d275.slice/crio-5e1ea54b693a0b10291bc0fa799ff6cbc4ae898f048ce874887e0d166af4ae30 WatchSource:0}: Error finding container 5e1ea54b693a0b10291bc0fa799ff6cbc4ae898f048ce874887e0d166af4ae30: Status 404 returned error can't find the container with id 5e1ea54b693a0b10291bc0fa799ff6cbc4ae898f048ce874887e0d166af4ae30 Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.469383 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"234831ee-247b-40ae-9c71-db9d7b45d275","Type":"ContainerStarted","Data":"5e1ea54b693a0b10291bc0fa799ff6cbc4ae898f048ce874887e0d166af4ae30"} Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.558614 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.560384 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.563965 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9fjw7" Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.585199 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.670716 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsc5\" (UniqueName: \"kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5\") pod \"kube-state-metrics-0\" (UID: \"61736716-9721-48ac-9318-c2ceca59af62\") " pod="openstack/kube-state-metrics-0" Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.774431 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsc5\" (UniqueName: \"kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5\") pod \"kube-state-metrics-0\" (UID: \"61736716-9721-48ac-9318-c2ceca59af62\") " pod="openstack/kube-state-metrics-0" Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.871782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsc5\" (UniqueName: \"kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5\") pod \"kube-state-metrics-0\" (UID: \"61736716-9721-48ac-9318-c2ceca59af62\") " pod="openstack/kube-state-metrics-0" Jan 21 14:49:31 crc kubenswrapper[4834]: I0121 14:49:31.962130 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:49:33 crc kubenswrapper[4834]: I0121 14:49:33.398129 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:49:33 crc kubenswrapper[4834]: I0121 14:49:33.617695 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61736716-9721-48ac-9318-c2ceca59af62","Type":"ContainerStarted","Data":"7bfe46248fca29485426d66d22f71f52764ed9a558e53bb794f157f83633a42e"} Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.894364 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9wtcs"] Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.895640 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.902773 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-snh6h" Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.904049 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.904239 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.918138 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wtcs"] Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.973240 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.975354 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:34 crc kubenswrapper[4834]: I0121 14:49:34.987025 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.054667 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.054737 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.071377 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.071688 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.071853 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.072213 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.072692 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vj46\" (UniqueName: \"kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.180985 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181067 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181122 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181158 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181190 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9qtf\" (UniqueName: \"kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181224 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vj46\" (UniqueName: \"kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181266 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181288 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181313 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181368 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181400 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts\") pod 
\"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181432 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.181458 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.182440 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.182640 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.199842 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.202591 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.202765 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.218459 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.230840 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vj46\" (UniqueName: \"kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46\") pod \"ovn-controller-9wtcs\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.241472 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9wtcs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284125 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284436 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284464 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284482 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9qtf\" (UniqueName: \"kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284520 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284563 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284783 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.284937 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.286754 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.286813 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.287591 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.350551 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9qtf\" (UniqueName: \"kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf\") pod \"ovn-controller-ovs-ztq6r\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.396836 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.399853 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.402269 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.404219 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t2c79" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.404648 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.404840 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.404969 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.408725 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591212 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591277 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591339 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591360 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591383 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhc9d\" (UniqueName: \"kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591399 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591442 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.591471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.599032 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695289 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695428 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695477 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695548 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695573 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695607 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhc9d\" (UniqueName: \"kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695636 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.695761 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.697782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.699045 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config\") 
pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.699733 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.702389 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.704452 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.705071 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.706856 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.722578 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:35 crc kubenswrapper[4834]: I0121 14:49:35.783216 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhc9d\" (UniqueName: \"kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d\") pod \"ovsdbserver-nb-0\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:36 crc kubenswrapper[4834]: I0121 14:49:36.075372 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:49:36 crc kubenswrapper[4834]: I0121 14:49:36.507032 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wtcs"] Jan 21 14:49:36 crc kubenswrapper[4834]: W0121 14:49:36.547853 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefc9a766_6bb5_4585_881f_019c2f33f096.slice/crio-af63ec048c9e60d0adfb5efdeb1a771503d3b79608bba7b4a2e324dcca3954fe WatchSource:0}: Error finding container af63ec048c9e60d0adfb5efdeb1a771503d3b79608bba7b4a2e324dcca3954fe: Status 404 returned error can't find the container with id af63ec048c9e60d0adfb5efdeb1a771503d3b79608bba7b4a2e324dcca3954fe Jan 21 14:49:36 crc kubenswrapper[4834]: I0121 14:49:36.821465 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs" event={"ID":"efc9a766-6bb5-4585-881f-019c2f33f096","Type":"ContainerStarted","Data":"af63ec048c9e60d0adfb5efdeb1a771503d3b79608bba7b4a2e324dcca3954fe"} Jan 21 14:49:36 crc kubenswrapper[4834]: I0121 14:49:36.845180 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:49:36 crc kubenswrapper[4834]: W0121 14:49:36.882341 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f229152_a987_497e_8777_937b4f6880d0.slice/crio-0b06e71e7b2f8158a53dec122fb2883c511b0fc8f3e7f6e5e914762326d14f17 WatchSource:0}: Error finding container 0b06e71e7b2f8158a53dec122fb2883c511b0fc8f3e7f6e5e914762326d14f17: Status 404 returned error can't find the container with id 0b06e71e7b2f8158a53dec122fb2883c511b0fc8f3e7f6e5e914762326d14f17 Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.165456 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.167123 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.174138 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.178145 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195518 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbmt\" (UniqueName: \"kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195683 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195716 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195749 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195796 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.195852 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314054 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbmt\" (UniqueName: \"kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314105 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314133 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314185 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314240 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.314297 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.316958 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.317040 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.317600 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.326690 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.333588 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 
21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.352217 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbmt\" (UniqueName: \"kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt\") pod \"ovn-controller-metrics-xhsvz\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.391029 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.538474 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.554141 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.579937 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.584978 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.588248 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.594060 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.737700 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.737914 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqcn\" (UniqueName: \"kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.738284 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.738369 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.842653 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 
21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.842747 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.842840 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.843072 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqcn\" (UniqueName: \"kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.844177 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.844320 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.845211 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.865979 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqcn\" (UniqueName: \"kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn\") pod \"dnsmasq-dns-7878659675-p25vl\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:37 crc kubenswrapper[4834]: I0121 14:49:37.905608 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerStarted","Data":"0b06e71e7b2f8158a53dec122fb2883c511b0fc8f3e7f6e5e914762326d14f17"} Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:37.956344 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.314266 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.315692 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.319022 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.319132 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ztvxj" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.319202 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.319352 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.469916 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.472869 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473067 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473166 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvzs\" (UniqueName: \"kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473205 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473236 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473267 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.473337 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574687 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574769 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574824 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvzs\" (UniqueName: \"kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574850 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574868 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574887 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574910 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.574955 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.575344 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.596286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.597670 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.601200 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.605116 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.605438 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.609806 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.610462 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.618780 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvzs\" (UniqueName: \"kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs\") pod \"ovsdbserver-sb-0\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:38 crc kubenswrapper[4834]: I0121 14:49:38.726370 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:49:47 crc kubenswrapper[4834]: I0121 14:49:47.113698 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:49:47 crc kubenswrapper[4834]: I0121 14:49:47.114387 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:49:47 crc kubenswrapper[4834]: I0121 14:49:47.114442 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:49:47 crc kubenswrapper[4834]: I0121 14:49:47.115454 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:49:47 crc kubenswrapper[4834]: I0121 14:49:47.115514 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8" gracePeriod=600 Jan 21 14:49:48 crc kubenswrapper[4834]: I0121 14:49:48.030095 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8" exitCode=0 Jan 21 14:49:48 crc kubenswrapper[4834]: I0121 14:49:48.030137 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8"} Jan 21 14:49:48 crc kubenswrapper[4834]: I0121 14:49:48.030472 4834 scope.go:117] "RemoveContainer" containerID="c6a0e2c89db9c973dfdd15d51e7113160968bb3b5a4f9316daef39ec270ba9ad" Jan 21 14:49:48 crc kubenswrapper[4834]: W0121 14:49:48.900375 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972527b7_5fbf_4cb1_9495_155dd778bba6.slice/crio-05523ee29bbaa4f86d7bd548893b96ad95e1068fa23a45ec488633673a70db02 WatchSource:0}: Error finding container 05523ee29bbaa4f86d7bd548893b96ad95e1068fa23a45ec488633673a70db02: Status 404 returned error can't find the container with id 05523ee29bbaa4f86d7bd548893b96ad95e1068fa23a45ec488633673a70db02 Jan 21 14:49:49 crc kubenswrapper[4834]: I0121 14:49:49.037810 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerStarted","Data":"05523ee29bbaa4f86d7bd548893b96ad95e1068fa23a45ec488633673a70db02"} Jan 21 14:49:50 crc kubenswrapper[4834]: E0121 14:49:50.168031 4834 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 21 14:49:50 crc kubenswrapper[4834]: E0121 14:49:50.168583 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5f8h5b8h675h9dh596h68ch57hf9h5chf6hdh9h55fh5fch89h85h644hbch686h566h678hc6h99hd4h5dfhc7h685hd8h658hb6h595hbfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9cnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
memcached-0_openstack(234831ee-247b-40ae-9c71-db9d7b45d275): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:50 crc kubenswrapper[4834]: E0121 14:49:50.169837 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" Jan 21 14:49:51 crc kubenswrapper[4834]: E0121 14:49:51.054851 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" Jan 21 14:49:58 crc kubenswrapper[4834]: E0121 14:49:58.586182 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 14:49:58 crc kubenswrapper[4834]: E0121 14:49:58.586990 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdmhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(b87b73b4-2715-4ce7-81b3-df0c1f57922f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:58 crc kubenswrapper[4834]: E0121 14:49:58.588511 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" Jan 21 14:49:59 crc kubenswrapper[4834]: E0121 14:49:59.161493 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 21 14:49:59 crc kubenswrapper[4834]: E0121 14:49:59.161656 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpj8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:49:59 crc kubenswrapper[4834]: E0121 14:49:59.163674 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" Jan 21 14:49:59 crc kubenswrapper[4834]: E0121 14:49:59.247624 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" Jan 21 14:49:59 crc kubenswrapper[4834]: E0121 14:49:59.248013 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" Jan 21 14:50:04 crc kubenswrapper[4834]: E0121 14:50:04.149254 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
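
The pull failures above (memcached-0, rabbitmq-cell1-server-0, openstack-galera-0) all fail the same way: `ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled`, i.e. the pull was cancelled client-side rather than rejected by the registry, after which the pod workers fall back to ImagePullBackOff. A minimal sketch for tallying these events — it assumes the journal has been saved one record per line to a plain-text file (the name `kubelet.log` is hypothetical), and only keys on the message shapes visible in this excerpt:

```python
#!/usr/bin/env python3
"""Tally ErrImagePull / ImagePullBackOff events in a kubelet journal dump.

A best-effort sketch, not a real log parser: it assumes one journald
record per line and relies on the image="..." and pod="..." fields
that the records above carry.
"""
import re
from collections import Counter

PULL_FAIL = re.compile(r'"PullImage from image service failed".*?image="([^"]+)"')
BACKOFF = re.compile(r"ErrImagePull|ImagePullBackOff")
POD_FIELD = re.compile(r'pod="([^"]+)"')  # matches only the unescaped trailing field

images, pods = Counter(), Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        m = PULL_FAIL.search(line)
        if m:
            images[m.group(1)] += 1
        if BACKOFF.search(line):
            p = POD_FIELD.search(line)
            if p:
                pods[p.group(1)] += 1

for image, n in images.most_common():
    print(f"{n:3d}  {image}")
print()
for pod, n in pods.most_common():
    print(f"{n:3d}  {pod}")
```

Note that in this wrapped rendering a single record can span several physical lines; the regexes above only work on the raw journal, where each record is one line.
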
image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 21 14:50:04 crc kubenswrapper[4834]: E0121 14:50:04.149559 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kscbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(df9714a2-fadf-48a3-8b71-07d7419cc713): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:04 crc kubenswrapper[4834]: E0121 14:50:04.150836 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" Jan 21 14:50:04 crc kubenswrapper[4834]: E0121 14:50:04.291251 4834 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" Jan 21 14:50:05 crc kubenswrapper[4834]: E0121 14:50:05.405861 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3" Jan 21 14:50:05 crc kubenswrapper[4834]: E0121 14:50:05.406108 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548h5b6h689hfdh555hf8h5c5h5f5h9h696h578h68ch95hc8h687h5f7h569h596h64bhc5h669h58bh5dh7ch89h5f5h66dh5d7h68fh6ch5ch586q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9qtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-ztq6r_openstack(5f229152-a987-497e-8777-937b4f6880d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:05 crc kubenswrapper[4834]: E0121 14:50:05.407315 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.161294 4834 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.161697 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwjb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-vtw9v_openstack(41ef6e99-69b9-4797-8dfd-1a3fc69b2299): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.163466 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" podUID="41ef6e99-69b9-4797-8dfd-1a3fc69b2299" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.182061 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.182219 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqvfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-2wwzm_openstack(f4d9deb4-e311-4ff4-87f8-f7c23742e4a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.183451 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm" podUID="f4d9deb4-e311-4ff4-87f8-f7c23742e4a5" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.196774 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.197012 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qlvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-dzsgs_openstack(1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.198561 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" podUID="1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.207463 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.207648 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpbdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-v4rc7_openstack(b6cd141c-8b13-4ca7-babf-a9ad4db32357): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.209454 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" podUID="b6cd141c-8b13-4ca7-babf-a9ad4db32357" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.302302 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" podUID="b6cd141c-8b13-4ca7-babf-a9ad4db32357" Jan 21 14:50:06 crc kubenswrapper[4834]: E0121 14:50:06.302426 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3\\\"\"" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.165213 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.173155 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.184128 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.266074 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config\") pod \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.266608 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc\") pod \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.266653 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlvw\" (UniqueName: \"kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw\") pod \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\" (UID: \"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.266691 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqvfl\" (UniqueName: \"kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl\") pod \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.266799 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config\") pod \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\" (UID: \"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.267271 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5" (UID: "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.267651 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config" (OuterVolumeSpecName: "config") pod "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5" (UID: "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.268218 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config" (OuterVolumeSpecName: "config") pod "1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0" (UID: "1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.274550 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw" (OuterVolumeSpecName: "kube-api-access-9qlvw") pod "1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0" (UID: "1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0"). InnerVolumeSpecName "kube-api-access-9qlvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.275184 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl" (OuterVolumeSpecName: "kube-api-access-nqvfl") pod "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5" (UID: "f4d9deb4-e311-4ff4-87f8-f7c23742e4a5"). InnerVolumeSpecName "kube-api-access-nqvfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.316691 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" event={"ID":"1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0","Type":"ContainerDied","Data":"18f5f05a2f61202bca9adc8e77f8d799e6319e9f0602d6300c72a61079428550"} Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.316811 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dzsgs" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.322721 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.327192 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" event={"ID":"41ef6e99-69b9-4797-8dfd-1a3fc69b2299","Type":"ContainerDied","Data":"6395964044b55e3865481f5c7a0d3f2fa94ba960f969d94eed7f742e6193457d"} Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.327255 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-vtw9v" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.330025 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm" event={"ID":"f4d9deb4-e311-4ff4-87f8-f7c23742e4a5","Type":"ContainerDied","Data":"e9d781ee0d656649785f36d619d39dacc27ddf106eac01eb3f6ad4548e10f5c6"} Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.330088 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-2wwzm" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.336054 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986"} Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.369407 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwjb8\" (UniqueName: \"kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8\") pod \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.369458 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc\") pod \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.369626 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config\") pod \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\" (UID: \"41ef6e99-69b9-4797-8dfd-1a3fc69b2299\") " Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41ef6e99-69b9-4797-8dfd-1a3fc69b2299" (UID: "41ef6e99-69b9-4797-8dfd-1a3fc69b2299"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370436 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config" (OuterVolumeSpecName: "config") pod "41ef6e99-69b9-4797-8dfd-1a3fc69b2299" (UID: "41ef6e99-69b9-4797-8dfd-1a3fc69b2299"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370612 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370630 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370640 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370650 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlvw\" (UniqueName: \"kubernetes.io/projected/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0-kube-api-access-9qlvw\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370659 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqvfl\" (UniqueName: \"kubernetes.io/projected/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-kube-api-access-nqvfl\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.370668 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.387012 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8" (OuterVolumeSpecName: "kube-api-access-xwjb8") pod "41ef6e99-69b9-4797-8dfd-1a3fc69b2299" (UID: "41ef6e99-69b9-4797-8dfd-1a3fc69b2299"). InnerVolumeSpecName "kube-api-access-xwjb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.443760 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.453137 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-2wwzm"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.472794 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.472836 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwjb8\" (UniqueName: \"kubernetes.io/projected/41ef6e99-69b9-4797-8dfd-1a3fc69b2299-kube-api-access-xwjb8\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.478347 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:50:07 crc kubenswrapper[4834]: W0121 14:50:07.486733 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b0ba3bb_e346_4168_8be9_bf9e70d13121.slice/crio-3f5d5c01dcca833e8d0e03659421d020989bbbf7a1d286b34886afd771849038 WatchSource:0}: Error finding container 3f5d5c01dcca833e8d0e03659421d020989bbbf7a1d286b34886afd771849038: Status 404 returned error can't find the container with id 3f5d5c01dcca833e8d0e03659421d020989bbbf7a1d286b34886afd771849038 Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.493035 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.501708 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dzsgs"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.506416 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.700328 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"] Jan 21 14:50:07 crc kubenswrapper[4834]: I0121 14:50:07.708761 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-vtw9v"] Jan 21 14:50:07 crc kubenswrapper[4834]: E0121 14:50:07.948188 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 21 14:50:07 crc kubenswrapper[4834]: E0121 14:50:07.948245 4834 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 21 14:50:07 crc kubenswrapper[4834]: E0121 14:50:07.948425 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxsc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(61736716-9721-48ac-9318-c2ceca59af62): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:50:07 crc kubenswrapper[4834]: E0121 14:50:07.949573 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="61736716-9721-48ac-9318-c2ceca59af62" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.334650 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0" path="/var/lib/kubelet/pods/1723fdc6-8c7a-46e9-a46f-20b7d94a3ca0/volumes" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.335465 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ef6e99-69b9-4797-8dfd-1a3fc69b2299" path="/var/lib/kubelet/pods/41ef6e99-69b9-4797-8dfd-1a3fc69b2299/volumes" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.335855 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d9deb4-e311-4ff4-87f8-f7c23742e4a5" path="/var/lib/kubelet/pods/f4d9deb4-e311-4ff4-87f8-f7c23742e4a5/volumes" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.343424 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xhsvz" event={"ID":"04a45f24-7164-403c-954f-5ff46c148c5a","Type":"ContainerStarted","Data":"7316f0b465ba7302445a100be55e5fdd0ce2f5a8a584dd0aec6aa55a6003dedb"} Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 
14:50:08.345143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs" event={"ID":"efc9a766-6bb5-4585-881f-019c2f33f096","Type":"ContainerStarted","Data":"5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e"} Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.345354 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9wtcs" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.345899 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-p25vl" event={"ID":"5b0ba3bb-e346-4168-8be9-bf9e70d13121","Type":"ContainerStarted","Data":"3f5d5c01dcca833e8d0e03659421d020989bbbf7a1d286b34886afd771849038"} Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.346994 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerStarted","Data":"3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f"} Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.348168 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerStarted","Data":"d4b3c82b0732fc02916a85c35db89df6aa5ff66a9feedd6cbc3955253d6ce79b"} Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.349488 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerStarted","Data":"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a"} Jan 21 14:50:08 crc kubenswrapper[4834]: E0121 14:50:08.351002 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="61736716-9721-48ac-9318-c2ceca59af62" Jan 21 14:50:08 crc kubenswrapper[4834]: I0121 14:50:08.372893 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9wtcs" podStartSLOduration=3.4668570020000002 podStartE2EDuration="34.372868565s" podCreationTimestamp="2026-01-21 14:49:34 +0000 UTC" firstStartedPulling="2026-01-21 14:49:36.570284434 +0000 UTC m=+1122.544633479" lastFinishedPulling="2026-01-21 14:50:07.476295997 +0000 UTC m=+1153.450645042" observedRunningTime="2026-01-21 14:50:08.36535307 +0000 UTC m=+1154.339702125" watchObservedRunningTime="2026-01-21 14:50:08.372868565 +0000 UTC m=+1154.347217620" Jan 21 14:50:09 crc kubenswrapper[4834]: I0121 14:50:09.359130 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"234831ee-247b-40ae-9c71-db9d7b45d275","Type":"ContainerStarted","Data":"7d56f8927a0e745e7f1e135ef6228427a467fa5229a8d9c26e035ff6e772686c"} Jan 21 14:50:09 crc kubenswrapper[4834]: I0121 14:50:09.360023 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 14:50:09 crc kubenswrapper[4834]: I0121 14:50:09.362569 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerID="9e9ee4c0dcff9375647a5f8eea0138a24498408929d944c90862a27e60ce0d5c" exitCode=0 Jan 21 14:50:09 crc kubenswrapper[4834]: I0121 14:50:09.362651 4834 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-7878659675-p25vl" event={"ID":"5b0ba3bb-e346-4168-8be9-bf9e70d13121","Type":"ContainerDied","Data":"9e9ee4c0dcff9375647a5f8eea0138a24498408929d944c90862a27e60ce0d5c"} Jan 21 14:50:09 crc kubenswrapper[4834]: I0121 14:50:09.387003 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.105021866 podStartE2EDuration="41.386957198s" podCreationTimestamp="2026-01-21 14:49:28 +0000 UTC" firstStartedPulling="2026-01-21 14:49:30.504982995 +0000 UTC m=+1116.479332040" lastFinishedPulling="2026-01-21 14:50:08.786918327 +0000 UTC m=+1154.761267372" observedRunningTime="2026-01-21 14:50:09.384252733 +0000 UTC m=+1155.358601798" watchObservedRunningTime="2026-01-21 14:50:09.386957198 +0000 UTC m=+1155.361306243" Jan 21 14:50:10 crc kubenswrapper[4834]: I0121 14:50:10.373312 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerStarted","Data":"fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01"} Jan 21 14:50:10 crc kubenswrapper[4834]: I0121 14:50:10.375743 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-p25vl" event={"ID":"5b0ba3bb-e346-4168-8be9-bf9e70d13121","Type":"ContainerStarted","Data":"72d073516b45715316a88246e07be967efa676f3ddeb2bea78d1200fd1e85031"} Jan 21 14:50:10 crc kubenswrapper[4834]: I0121 14:50:10.403562 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-p25vl" podStartSLOduration=32.110637141 podStartE2EDuration="33.403536159s" podCreationTimestamp="2026-01-21 14:49:37 +0000 UTC" firstStartedPulling="2026-01-21 14:50:07.492740089 +0000 UTC m=+1153.467089134" lastFinishedPulling="2026-01-21 14:50:08.785639107 +0000 UTC m=+1154.759988152" observedRunningTime="2026-01-21 14:50:10.395949652 +0000 UTC m=+1156.370298707" watchObservedRunningTime="2026-01-21 14:50:10.403536159 +0000 UTC m=+1156.377885214" Jan 21 14:50:11 crc kubenswrapper[4834]: I0121 14:50:11.383310 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.394487 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerStarted","Data":"1a272bf3dc5a9407cead8d9f3ebf4fd78348c18107084c4a2acf29f42c67cd6a"} Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.396890 4834 generic.go:334] "Generic (PLEG): container finished" podID="73b312a8-0dee-488f-b998-4653b1cce8be" containerID="88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a" exitCode=0 Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.396995 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerDied","Data":"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a"} Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.400521 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xhsvz" event={"ID":"04a45f24-7164-403c-954f-5ff46c148c5a","Type":"ContainerStarted","Data":"e3bb8d41e0b523f5b37a3567db174269b4e1271905c76b1d4dc36d52baf6dbf3"} Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.402143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerStarted","Data":"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"} Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.404101 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerStarted","Data":"a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a"} Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.420619 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=31.183564772 podStartE2EDuration="35.420564398s" podCreationTimestamp="2026-01-21 14:49:37 +0000 UTC" firstStartedPulling="2026-01-21 14:50:07.486741572 +0000 UTC m=+1153.461090617" lastFinishedPulling="2026-01-21 14:50:11.723741198 +0000 UTC m=+1157.698090243" observedRunningTime="2026-01-21 14:50:12.420270738 +0000 UTC m=+1158.394619783" watchObservedRunningTime="2026-01-21 14:50:12.420564398 +0000 UTC m=+1158.394913443" Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.465910 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xhsvz" podStartSLOduration=31.267686425 podStartE2EDuration="35.465891401s" podCreationTimestamp="2026-01-21 14:49:37 +0000 UTC" firstStartedPulling="2026-01-21 14:50:07.494647529 +0000 UTC m=+1153.468996574" lastFinishedPulling="2026-01-21 14:50:11.692852495 +0000 UTC m=+1157.667201550" observedRunningTime="2026-01-21 14:50:12.464517958 +0000 UTC m=+1158.438867013" watchObservedRunningTime="2026-01-21 14:50:12.465891401 +0000 UTC m=+1158.440240446" Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.492342 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.674058197 podStartE2EDuration="38.492314415s" podCreationTimestamp="2026-01-21 14:49:34 +0000 UTC" firstStartedPulling="2026-01-21 14:49:48.903022284 +0000 UTC m=+1134.877371329" lastFinishedPulling="2026-01-21 14:50:11.721278492 +0000 UTC m=+1157.695627547" observedRunningTime="2026-01-21 14:50:12.486684389 +0000 UTC m=+1158.461033454" watchObservedRunningTime="2026-01-21 14:50:12.492314415 +0000 UTC m=+1158.466663460" Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.907102 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"] Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.997994 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"] Jan 21 14:50:12 crc kubenswrapper[4834]: I0121 14:50:12.999611 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.004046 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.022238 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"] Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.317939 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.318412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.318518 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.318606 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.318714 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.421863 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.422020 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.422669 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerStarted","Data":"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c"} Jan 21 14:50:13 crc 
kubenswrapper[4834]: I0121 14:50:13.423617 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.424120 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.424269 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.425174 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.425897 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.428026 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerStarted","Data":"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99"} Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.429124 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.429660 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.449105 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677\") pod \"dnsmasq-dns-586b989cdc-k6hpp\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.489311 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.410818285 
podStartE2EDuration="46.489290064s" podCreationTimestamp="2026-01-21 14:49:27 +0000 UTC" firstStartedPulling="2026-01-21 14:49:30.039322054 +0000 UTC m=+1116.013671089" lastFinishedPulling="2026-01-21 14:50:06.117793823 +0000 UTC m=+1152.092142868" observedRunningTime="2026-01-21 14:50:13.481382968 +0000 UTC m=+1159.455732013" watchObservedRunningTime="2026-01-21 14:50:13.489290064 +0000 UTC m=+1159.463639109" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.503694 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.612481 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.728301 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.738873 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc\") pod \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.739109 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpbdm\" (UniqueName: \"kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm\") pod \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.739253 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config\") pod \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\" (UID: \"b6cd141c-8b13-4ca7-babf-a9ad4db32357\") " Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.740631 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6cd141c-8b13-4ca7-babf-a9ad4db32357" (UID: "b6cd141c-8b13-4ca7-babf-a9ad4db32357"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.741806 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config" (OuterVolumeSpecName: "config") pod "b6cd141c-8b13-4ca7-babf-a9ad4db32357" (UID: "b6cd141c-8b13-4ca7-babf-a9ad4db32357"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.757563 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm" (OuterVolumeSpecName: "kube-api-access-zpbdm") pod "b6cd141c-8b13-4ca7-babf-a9ad4db32357" (UID: "b6cd141c-8b13-4ca7-babf-a9ad4db32357"). InnerVolumeSpecName "kube-api-access-zpbdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.784396 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"] Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.854950 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.854993 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd141c-8b13-4ca7-babf-a9ad4db32357-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:13 crc kubenswrapper[4834]: I0121 14:50:13.855008 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpbdm\" (UniqueName: \"kubernetes.io/projected/b6cd141c-8b13-4ca7-babf-a9ad4db32357-kube-api-access-zpbdm\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.342100 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.437505 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" event={"ID":"b6cd141c-8b13-4ca7-babf-a9ad4db32357","Type":"ContainerDied","Data":"4f7d4fb6bca0c6eb788e4ba5837ca914137657082efc51aa5e8daecd517b943f"} Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.437563 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-v4rc7" Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.440015 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerID="9422ca64496ff498b34b45598653a1e1f590e2fb557b280975e2d35fb7c94c61" exitCode=0 Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.440100 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" event={"ID":"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c","Type":"ContainerDied","Data":"9422ca64496ff498b34b45598653a1e1f590e2fb557b280975e2d35fb7c94c61"} Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.440126 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" event={"ID":"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c","Type":"ContainerStarted","Data":"b5b5f028fdb6503aae80349176e885495a3e82fcbe2771f0ae7b6e202008af32"} Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.526861 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"] Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.536740 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-v4rc7"] Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.727848 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 14:50:14 crc kubenswrapper[4834]: I0121 14:50:14.770408 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.077470 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.120484 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/ovsdbserver-nb-0" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.449399 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" event={"ID":"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c","Type":"ContainerStarted","Data":"9c1d3de00e08d81b78a124f5d8d4d21ca577eba935e29ea2a55006702c1fab2b"} Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.450224 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.489653 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" podStartSLOduration=3.4896183020000002 podStartE2EDuration="3.489618302s" podCreationTimestamp="2026-01-21 14:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:15.480729815 +0000 UTC m=+1161.455078870" watchObservedRunningTime="2026-01-21 14:50:15.489618302 +0000 UTC m=+1161.463967347" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.507332 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 14:50:15 crc kubenswrapper[4834]: I0121 14:50:15.531679 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.013103 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.014622 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.022878 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.023210 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.024272 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.027460 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wxnpr" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.045046 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.095440 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.095522 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.095553 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.095574 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.096213 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsp9\" (UniqueName: \"kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.096292 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.096320 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198185 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198554 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198581 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198635 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsp9\" (UniqueName: \"kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198677 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 
14:50:16.198706 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.198783 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.199284 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.199800 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.200019 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.208691 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.208989 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.218966 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.222782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsp9\" (UniqueName: \"kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9\") pod \"ovn-northd-0\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") " pod="openstack/ovn-northd-0" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.335243 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd141c-8b13-4ca7-babf-a9ad4db32357" path="/var/lib/kubelet/pods/b6cd141c-8b13-4ca7-babf-a9ad4db32357/volumes" Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.353401 4834 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.353401 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.460415 4834 generic.go:334] "Generic (PLEG): container finished" podID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerID="a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2" exitCode=0
Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.461213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerDied","Data":"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"}
Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.461542 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp"
Jan 21 14:50:16 crc kubenswrapper[4834]: I0121 14:50:16.852639 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 21 14:50:16 crc kubenswrapper[4834]: W0121 14:50:16.861400 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa0d119_4c43_4161_8e43_94de0b186cb8.slice/crio-7e0e235525d2749bdc2e23b8babceed11902ea21da5b237e3fd84eca239eb1a5 WatchSource:0}: Error finding container 7e0e235525d2749bdc2e23b8babceed11902ea21da5b237e3fd84eca239eb1a5: Status 404 returned error can't find the container with id 7e0e235525d2749bdc2e23b8babceed11902ea21da5b237e3fd84eca239eb1a5
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.470022 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerStarted","Data":"7e0e235525d2749bdc2e23b8babceed11902ea21da5b237e3fd84eca239eb1a5"}
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.472449 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerStarted","Data":"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"}
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.506137 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371984.34866 podStartE2EDuration="52.506115794s" podCreationTimestamp="2026-01-21 14:49:25 +0000 UTC" firstStartedPulling="2026-01-21 14:49:28.772064976 +0000 UTC m=+1114.746414021" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:17.500683704 +0000 UTC m=+1163.475032749" watchObservedRunningTime="2026-01-21 14:50:17.506115794 +0000 UTC m=+1163.480464829"
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.546216 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.546264 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 21 14:50:17 crc kubenswrapper[4834]: I0121 14:50:17.959686 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-p25vl"
Jan 21 14:50:18 crc kubenswrapper[4834]: I0121 14:50:18.480215 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerStarted","Data":"65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464"}
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.040320 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.040405 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.491193 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerStarted","Data":"fa01f639bc91875c2d0bcb559a386c138421ca21854b35458ce844de676ca39a"}
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.491362 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.492843 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerStarted","Data":"8018ed8fca11c93bbc50ad4d89fee33fca796f9f20da5b7258ee155a5c1edde0"}
Jan 21 14:50:19 crc kubenswrapper[4834]: I0121 14:50:19.517569 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.517288117 podStartE2EDuration="4.517542028s" podCreationTimestamp="2026-01-21 14:50:15 +0000 UTC" firstStartedPulling="2026-01-21 14:50:16.864011271 +0000 UTC m=+1162.838360316" lastFinishedPulling="2026-01-21 14:50:17.864265182 +0000 UTC m=+1163.838614227" observedRunningTime="2026-01-21 14:50:19.50768219 +0000 UTC m=+1165.482031235" watchObservedRunningTime="2026-01-21 14:50:19.517542028 +0000 UTC m=+1165.491891073"
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.320542 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.411210 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.516489 4834 generic.go:334] "Generic (PLEG): container finished" podID="5f229152-a987-497e-8777-937b4f6880d0" containerID="e670dfc9b5b6fa7161d55bec337b50e6a0762c64b164d107b62dcae1c0aacfd9" exitCode=0
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.517677 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerDied","Data":"e670dfc9b5b6fa7161d55bec337b50e6a0762c64b164d107b62dcae1c0aacfd9"}
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.951367 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"]
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.951993 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="dnsmasq-dns" containerID="cri-o://9c1d3de00e08d81b78a124f5d8d4d21ca577eba935e29ea2a55006702c1fab2b" gracePeriod=10
Jan 21 14:50:21 crc kubenswrapper[4834]: I0121 14:50:21.953119 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.028502 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"]
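The openstack-galera-0 startup-latency entry above carries podStartSLOduration=-9223371984.34866 alongside lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC". That is a saturated duration: Go's time.Time.Sub clamps a result that overflows time.Duration to the minimum int64 nanoseconds (about -9223372036.85s), and adding the logged 52.506115794s E2E duration lands exactly on the logged number. A small sketch showing the clamping; the specific arithmetic inside pod_startup_latency_tracker is my reading of the log values, not verified against the kubelet source:

```go
// Demonstrates how subtracting across Go's zero time.Time saturates:
// time.Time.Sub clamps results that overflow time.Duration to the
// minimum/maximum int64 nanoseconds, which is where values like
// podStartSLOduration=-9223371984.34866 come from when
// lastFinishedPulling is the zero time "0001-01-01 00:00:00 +0000 UTC".
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
	created := time.Date(2026, 1, 21, 14, 49, 25, 0, time.UTC)

	d := lastFinishedPulling.Sub(created)               // overflows; clamped
	fmt.Println(d.Seconds())                            // -9.223372036854776e+09
	fmt.Println(time.Duration(math.MinInt64).Seconds()) // same clamp value

	// Adding the podStartE2EDuration from the log lands near the logged
	// SLO value (-9223371984.34866). Assumed reading of the tracker.
	off, _ := time.ParseDuration("52.506115794s")
	fmt.Println((d + off).Seconds())
}
```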
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.030347 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.044243 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"]
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.122645 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.123140 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.123223 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.123252 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.123304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnr6f\" (UniqueName: \"kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.225523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.225592 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.225623 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.225655 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnr6f\" (UniqueName: \"kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.225703 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.226553 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.226715 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.226960 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.226998 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.247485 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnr6f\" (UniqueName: \"kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f\") pod \"dnsmasq-dns-67fdf7998c-7jmh9\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.351953 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9"
Jan 21 14:50:22 crc kubenswrapper[4834]: I0121 14:50:22.799496 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"]
Jan 21 14:50:22 crc kubenswrapper[4834]: W0121 14:50:22.802556 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dcd0de6_69da_45fb_8d4b_d4e94e087ec0.slice/crio-9387ec890c9977b7f5099fc5fa3a9061fc4b1fff5a80fce28f0e67e5bd8b60c0 WatchSource:0}: Error finding container 9387ec890c9977b7f5099fc5fa3a9061fc4b1fff5a80fce28f0e67e5bd8b60c0: Status 404 returned error can't find the container with id 9387ec890c9977b7f5099fc5fa3a9061fc4b1fff5a80fce28f0e67e5bd8b60c0
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.176045 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.181778 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.184554 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.184602 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.184862 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dw7xc"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.189608 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.200413 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.344584 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.344717 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wtd\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.344776 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.344808 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.344987 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.446471 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6wtd\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.446568 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.446611 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.446640 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.446677 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: E0121 14:50:23.446868 4834 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 14:50:23 crc kubenswrapper[4834]: E0121 14:50:23.446896 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 14:50:23 crc kubenswrapper[4834]: E0121 14:50:23.446973 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:23.946951173 +0000 UTC m=+1169.921300218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.447139 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.447486 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.447547 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.473189 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6wtd\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.474521 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.504764 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.536961 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" event={"ID":"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0","Type":"ContainerStarted","Data":"9387ec890c9977b7f5099fc5fa3a9061fc4b1fff5a80fce28f0e67e5bd8b60c0"}
Jan 21 14:50:23 crc kubenswrapper[4834]: I0121 14:50:23.955127 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:23 crc kubenswrapper[4834]: E0121 14:50:23.955197 4834 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 14:50:23 crc kubenswrapper[4834]: E0121 14:50:23.955567 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
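A few entries up, the readiness probe for the terminating dnsmasq-dns-586b989cdc-k6hpp reports "dial tcp 10.217.0.113:5353: connect: connection refused", which is expected while the container shuts down under its 10s grace period. The output is consistent with a TCPSocket-style probe, which reduces to a plain dial attempt. A minimal sketch of such a check; the address mirrors the log entry and the timeout is illustrative, not read from the pod spec:

```go
// A TCPSocket-style readiness check reduces to a dial attempt against the
// pod IP and port; "connect: connection refused" in the log is the raw
// error from exactly this kind of dial.
package main

import (
	"fmt"
	"net"
	"time"
)

func tcpProbe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.113:5353: connect: connection refused"
	}
	return conn.Close()
}

func main() {
	if err := tcpProbe("10.217.0.113:5353", time.Second); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("probe succeeded")
}
```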
"{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:24.955608208 +0000 UTC m=+1170.929957253 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found Jan 21 14:50:24 crc kubenswrapper[4834]: I0121 14:50:24.545956 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerID="9c1d3de00e08d81b78a124f5d8d4d21ca577eba935e29ea2a55006702c1fab2b" exitCode=0 Jan 21 14:50:24 crc kubenswrapper[4834]: I0121 14:50:24.546004 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" event={"ID":"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c","Type":"ContainerDied","Data":"9c1d3de00e08d81b78a124f5d8d4d21ca577eba935e29ea2a55006702c1fab2b"} Jan 21 14:50:24 crc kubenswrapper[4834]: I0121 14:50:24.971109 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0" Jan 21 14:50:24 crc kubenswrapper[4834]: E0121 14:50:24.971337 4834 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:50:24 crc kubenswrapper[4834]: E0121 14:50:24.971375 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:50:24 crc kubenswrapper[4834]: E0121 14:50:24.971442 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:26.971423042 +0000 UTC m=+1172.945772087 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.556353 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerStarted","Data":"71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c"} Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.561175 4834 generic.go:334] "Generic (PLEG): container finished" podID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerID="d7cdc666bae62349672c35da02cb6f15df3fee55134ef5d0109b0e0e46dba9f0" exitCode=0 Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.561217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" event={"ID":"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0","Type":"ContainerDied","Data":"d7cdc666bae62349672c35da02cb6f15df3fee55134ef5d0109b0e0e46dba9f0"} Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.814254 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.926990 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 14:50:25 crc kubenswrapper[4834]: I0121 14:50:25.959394 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.104543 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config\") pod \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.104978 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc\") pod \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.105013 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb\") pod \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.105064 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb\") pod \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.105091 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677\") pod \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\" (UID: \"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c\") " Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.112265 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
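The failed etc-swift mounts are being retried with a doubling delay: durationBeforeRetry went 500ms, then 1s, then 2s above, and reaches 4s further down. This matches kubelet's exponential backoff for failed volume operations. A minimal sketch of that schedule; the 500ms initial value is taken straight from the log, while the 2m2s cap is kubelet's default as best I recall, so treat it as an assumption:

```go
// Sketch of the exponential backoff visible in the log: each failed
// MountVolume.SetUp doubles durationBeforeRetry (500ms, 1s, 2s, 4s, ...)
// until a cap is hit. The cap value is an assumption, not from the log.
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap
)

func main() {
	d := initialDurationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, d)
		d *= 2
		if d > maxDurationBeforeRetry {
			d = maxDurationBeforeRetry
		}
	}
}
```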
volume "kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677" (OuterVolumeSpecName: "kube-api-access-2k677") pod "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" (UID: "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c"). InnerVolumeSpecName "kube-api-access-2k677". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.166411 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" (UID: "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.167035 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config" (OuterVolumeSpecName: "config") pod "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" (UID: "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.170545 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" (UID: "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.178111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" (UID: "f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.207020 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.207053 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.207065 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.207076 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k677\" (UniqueName: \"kubernetes.io/projected/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-kube-api-access-2k677\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.207085 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.571908 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerStarted","Data":"bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3"} Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.572146 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.575538 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" event={"ID":"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0","Type":"ContainerStarted","Data":"d547a8c911f1175d0404c262957df9c6eeae92858f611075b4c9e2c5d067e632"} Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.575729 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.577963 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp" event={"ID":"f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c","Type":"ContainerDied","Data":"b5b5f028fdb6503aae80349176e885495a3e82fcbe2771f0ae7b6e202008af32"} Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.578010 4834 scope.go:117] "RemoveContainer" containerID="9c1d3de00e08d81b78a124f5d8d4d21ca577eba935e29ea2a55006702c1fab2b" Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.577971 4834 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.577971 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-k6hpp"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.581409 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61736716-9721-48ac-9318-c2ceca59af62","Type":"ContainerStarted","Data":"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236"}
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.581889 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.602010 4834 scope.go:117] "RemoveContainer" containerID="9422ca64496ff498b34b45598653a1e1f590e2fb557b280975e2d35fb7c94c61"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.611089 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ztq6r" podStartSLOduration=9.208592752 podStartE2EDuration="52.611059971s" podCreationTimestamp="2026-01-21 14:49:34 +0000 UTC" firstStartedPulling="2026-01-21 14:49:36.901270435 +0000 UTC m=+1122.875619480" lastFinishedPulling="2026-01-21 14:50:20.303737654 +0000 UTC m=+1166.278086699" observedRunningTime="2026-01-21 14:50:26.600651138 +0000 UTC m=+1172.575000183" watchObservedRunningTime="2026-01-21 14:50:26.611059971 +0000 UTC m=+1172.585409016"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.640899 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.522551168 podStartE2EDuration="55.640858608s" podCreationTimestamp="2026-01-21 14:49:31 +0000 UTC" firstStartedPulling="2026-01-21 14:49:33.427582363 +0000 UTC m=+1119.401931408" lastFinishedPulling="2026-01-21 14:50:25.545889803 +0000 UTC m=+1171.520238848" observedRunningTime="2026-01-21 14:50:26.634387537 +0000 UTC m=+1172.608736592" watchObservedRunningTime="2026-01-21 14:50:26.640858608 +0000 UTC m=+1172.615207653"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.656497 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" podStartSLOduration=5.656476594 podStartE2EDuration="5.656476594s" podCreationTimestamp="2026-01-21 14:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:26.651845319 +0000 UTC m=+1172.626194364" watchObservedRunningTime="2026-01-21 14:50:26.656476594 +0000 UTC m=+1172.630825639"
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.676257 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"]
Jan 21 14:50:26 crc kubenswrapper[4834]: I0121 14:50:26.686790 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-k6hpp"]
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.056334 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0"
Jan 21 14:50:27 crc kubenswrapper[4834]: E0121 14:50:27.056813 4834 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 14:50:27 crc kubenswrapper[4834]: E0121 14:50:27.056850 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 14:50:27 crc kubenswrapper[4834]: E0121 14:50:27.056939 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:31.056900884 +0000 UTC m=+1177.031249929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.437295 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4nq5q"]
Jan 21 14:50:27 crc kubenswrapper[4834]: E0121 14:50:27.437750 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="init"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.437773 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="init"
Jan 21 14:50:27 crc kubenswrapper[4834]: E0121 14:50:27.437822 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="dnsmasq-dns"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.437833 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="dnsmasq-dns"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.438051 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" containerName="dnsmasq-dns"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.438746 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.440586 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.440978 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.441789 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.448087 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l5mkc"]
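Every etc-swift mount attempt so far has failed for the same reason: the projected volume references ConfigMap swift-ring-files, which will not exist until the swift-ring-rebalance job just scheduled above has produced the rings. To check the missing dependency by hand, one could query the API with client-go; namespace and name come from the log, and the kubeconfig path is a placeholder:

```go
// Checks whether the ConfigMap the projected volume needs actually exists,
// mirroring kubelet's `configmap "swift-ring-files" not found` error.
// Namespace/name are from the log; the kubeconfig path is illustrative.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := client.CoreV1().ConfigMaps("openstack").
		Get(context.TODO(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println(`configmap "swift-ring-files" not found; the mount will keep backing off`)
	case err != nil:
		panic(err)
	default:
		fmt.Printf("found %s/%s with %d keys; the next retry should succeed\n",
			cm.Namespace, cm.Name, len(cm.Data))
	}
}
```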
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.449203 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.451909 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.463290 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4nq5q"]
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.479278 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l5mkc"]
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879078 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879138 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879171 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879192 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879228 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879263 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879280 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6666n\" (UniqueName: \"kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879299 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7csz\" (UniqueName: \"kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.879356 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.889912 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ztq6r"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.980959 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981074 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981134 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981161 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981215 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981261 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981311 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6666n\" (UniqueName: \"kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.981341 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7csz\" (UniqueName: \"kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.982458 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.982844 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.983139 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.983292 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.987527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.988339 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
Jan 21 14:50:27 crc kubenswrapper[4834]: I0121 14:50:27.988505 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6666n\" (UniqueName: \"kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n\") pod \"root-account-create-update-l5mkc\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " pod="openstack/root-account-create-update-l5mkc" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.009250 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7csz\" (UniqueName: \"kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz\") pod \"swift-ring-rebalance-4nq5q\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " pod="openstack/swift-ring-rebalance-4nq5q" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.059370 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4nq5q" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.072033 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l5mkc" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.547169 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c" path="/var/lib/kubelet/pods/f4fcdfd5-a5f1-4cd5-8716-00da1133cb1c/volumes" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.611402 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-59fnb"] Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.612808 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.637320 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-59fnb"] Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.799012 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.799297 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wtm\" (UniqueName: \"kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.939772 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.939833 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wtm\" (UniqueName: \"kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.941559 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.992189 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-23dc-account-create-update-w6z88"] Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.993704 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.994607 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wtm\" (UniqueName: \"kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm\") pod \"keystone-db-create-59fnb\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:28 crc kubenswrapper[4834]: I0121 14:50:28.996633 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.026562 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cv77m"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.028017 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.044840 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbkq7\" (UniqueName: \"kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.044892 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7q7h\" (UniqueName: \"kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h\") pod \"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.044973 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts\") pod \"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.045029 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.055023 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-23dc-account-create-update-w6z88"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.075732 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-create-cv77m"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.105237 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4nq5q"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.128673 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6ed4-account-create-update-nplb4"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.130807 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.136216 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ed4-account-create-update-nplb4"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146621 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7q7h\" (UniqueName: \"kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h\") pod \"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146699 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts\") pod \"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146760 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146789 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146866 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbkq7\" (UniqueName: \"kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.146885 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crwds\" (UniqueName: \"kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.148001 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts\") pod 
\"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.148686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.166856 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.181108 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7q7h\" (UniqueName: \"kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h\") pod \"keystone-23dc-account-create-update-w6z88\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.183477 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbkq7\" (UniqueName: \"kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7\") pod \"placement-db-create-cv77m\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.236499 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.251254 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crwds\" (UniqueName: \"kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.251414 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.252456 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.276899 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crwds\" (UniqueName: \"kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds\") pod \"placement-6ed4-account-create-update-nplb4\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.357148 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l5mkc"] Jan 21 
14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.377675 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.406826 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cv77m" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.483902 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.886494 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-59fnb"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.913285 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-23dc-account-create-update-w6z88"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.935314 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4nq5q" event={"ID":"e4069528-b187-472b-a3b0-fa87693b4626","Type":"ContainerStarted","Data":"6d10c6df35d9b022b50620a842c56c8d50c817c67fff88948ab54e2597408804"} Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.943113 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-59fnb" event={"ID":"bb1c174a-a1ea-4c84-a0e3-5241055f2c28","Type":"ContainerStarted","Data":"6dac7db3a22c0dd39fe8aee22772d06de770d0055697522686a6631a356520eb"} Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.946612 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cv77m"] Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.951391 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l5mkc" event={"ID":"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff","Type":"ContainerStarted","Data":"d1b97bc8f2f664d864e3d18a40d7f873a47420745e6d8e43db476eeef320f82b"} Jan 21 14:50:29 crc kubenswrapper[4834]: I0121 14:50:29.952556 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-23dc-account-create-update-w6z88" event={"ID":"a5f1661a-e972-4a56-bf7c-75e6f605a4c9","Type":"ContainerStarted","Data":"410fc619ccef6e448b7f745a48016d7354709722d316fe0b5a3adaa9d84f002b"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.208560 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ed4-account-create-update-nplb4"] Jan 21 14:50:30 crc kubenswrapper[4834]: W0121 14:50:30.213135 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a2958c1_85bd_45a4_962f_5af74b8b2896.slice/crio-4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696 WatchSource:0}: Error finding container 4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696: Status 404 returned error can't find the container with id 4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.974205 4834 generic.go:334] "Generic (PLEG): container finished" podID="a5f1661a-e972-4a56-bf7c-75e6f605a4c9" containerID="e6ec65d0f480c3b4dab94f41c704198e1bbe527e959876c8f6b812b0f9ebc505" exitCode=0 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.974371 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-23dc-account-create-update-w6z88" 
event={"ID":"a5f1661a-e972-4a56-bf7c-75e6f605a4c9","Type":"ContainerDied","Data":"e6ec65d0f480c3b4dab94f41c704198e1bbe527e959876c8f6b812b0f9ebc505"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.979212 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb1c174a-a1ea-4c84-a0e3-5241055f2c28" containerID="4584afc588265a66c1cc48f6be939cf379f4c49c72c881e4d5864241a68a8ecd" exitCode=0 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.979312 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-59fnb" event={"ID":"bb1c174a-a1ea-4c84-a0e3-5241055f2c28","Type":"ContainerDied","Data":"4584afc588265a66c1cc48f6be939cf379f4c49c72c881e4d5864241a68a8ecd"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.981565 4834 generic.go:334] "Generic (PLEG): container finished" podID="e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" containerID="44a1ebe7b3d2a58ed234b940dcad849407babcc59811e4b7c8c1488e3e574a78" exitCode=0 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.981630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l5mkc" event={"ID":"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff","Type":"ContainerDied","Data":"44a1ebe7b3d2a58ed234b940dcad849407babcc59811e4b7c8c1488e3e574a78"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.992998 4834 generic.go:334] "Generic (PLEG): container finished" podID="5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" containerID="9cdf46c68fa323acb850f14c8117071740959cc8e11f908c325e927865ffe66b" exitCode=0 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.993116 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cv77m" event={"ID":"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b","Type":"ContainerDied","Data":"9cdf46c68fa323acb850f14c8117071740959cc8e11f908c325e927865ffe66b"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.993147 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cv77m" event={"ID":"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b","Type":"ContainerStarted","Data":"23d3f3fad6c7bdff43d05f94a29639f2e9852371ac89a3c34b21fe55f783b791"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.996073 4834 generic.go:334] "Generic (PLEG): container finished" podID="3a2958c1-85bd-45a4-962f-5af74b8b2896" containerID="26c58187ecc7217104ccff0eda5c2f6560785955665bcfa555a8b04789f332b8" exitCode=0 Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.996136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ed4-account-create-update-nplb4" event={"ID":"3a2958c1-85bd-45a4-962f-5af74b8b2896","Type":"ContainerDied","Data":"26c58187ecc7217104ccff0eda5c2f6560785955665bcfa555a8b04789f332b8"} Jan 21 14:50:30 crc kubenswrapper[4834]: I0121 14:50:30.996170 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ed4-account-create-update-nplb4" event={"ID":"3a2958c1-85bd-45a4-962f-5af74b8b2896","Type":"ContainerStarted","Data":"4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696"} Jan 21 14:50:31 crc kubenswrapper[4834]: I0121 14:50:31.132202 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0" Jan 21 14:50:31 crc kubenswrapper[4834]: E0121 14:50:31.132497 4834 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:50:31 crc kubenswrapper[4834]: E0121 14:50:31.132632 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:50:31 crc kubenswrapper[4834]: E0121 14:50:31.132709 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:39.132688348 +0000 UTC m=+1185.107037393 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found Jan 21 14:50:31 crc kubenswrapper[4834]: I0121 14:50:31.584694 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 14:50:31 crc kubenswrapper[4834]: I0121 14:50:31.990135 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:50:32 crc kubenswrapper[4834]: I0121 14:50:32.359026 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" Jan 21 14:50:32 crc kubenswrapper[4834]: I0121 14:50:32.456159 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:50:32 crc kubenswrapper[4834]: I0121 14:50:32.456389 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-p25vl" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" containerID="cri-o://72d073516b45715316a88246e07be967efa676f3ddeb2bea78d1200fd1e85031" gracePeriod=10 Jan 21 14:50:32 crc kubenswrapper[4834]: I0121 14:50:32.957671 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7878659675-p25vl" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 21 14:50:33 crc kubenswrapper[4834]: I0121 14:50:33.015419 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerID="72d073516b45715316a88246e07be967efa676f3ddeb2bea78d1200fd1e85031" exitCode=0 Jan 21 14:50:33 crc kubenswrapper[4834]: I0121 14:50:33.015489 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-p25vl" event={"ID":"5b0ba3bb-e346-4168-8be9-bf9e70d13121","Type":"ContainerDied","Data":"72d073516b45715316a88246e07be967efa676f3ddeb2bea78d1200fd1e85031"} Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.523506 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sw7v4"] Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.524715 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.534125 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sw7v4"] Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.592521 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.592638 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qpvq\" (UniqueName: \"kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.694403 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.694546 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qpvq\" (UniqueName: \"kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.695361 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.716841 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qpvq\" (UniqueName: \"kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq\") pod \"glance-db-create-sw7v4\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.847789 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.923271 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9ca3-account-create-update-57lcl"] Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.924467 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.927769 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 14:50:34 crc kubenswrapper[4834]: I0121 14:50:34.943066 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9ca3-account-create-update-57lcl"] Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.102858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwkd\" (UniqueName: \"kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.102983 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.204318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwkd\" (UniqueName: \"kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.204493 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.205567 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.222799 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwkd\" (UniqueName: \"kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd\") pod \"glance-9ca3-account-create-update-57lcl\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:35 crc kubenswrapper[4834]: I0121 14:50:35.243387 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.632054 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.636460 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crwds\" (UniqueName: \"kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds\") pod \"3a2958c1-85bd-45a4-962f-5af74b8b2896\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.636628 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts\") pod \"3a2958c1-85bd-45a4-962f-5af74b8b2896\" (UID: \"3a2958c1-85bd-45a4-962f-5af74b8b2896\") " Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.637725 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a2958c1-85bd-45a4-962f-5af74b8b2896" (UID: "3a2958c1-85bd-45a4-962f-5af74b8b2896"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.643610 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds" (OuterVolumeSpecName: "kube-api-access-crwds") pod "3a2958c1-85bd-45a4-962f-5af74b8b2896" (UID: "3a2958c1-85bd-45a4-962f-5af74b8b2896"). InnerVolumeSpecName "kube-api-access-crwds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.738604 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a2958c1-85bd-45a4-962f-5af74b8b2896-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:36 crc kubenswrapper[4834]: I0121 14:50:36.738655 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crwds\" (UniqueName: \"kubernetes.io/projected/3a2958c1-85bd-45a4-962f-5af74b8b2896-kube-api-access-crwds\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:37 crc kubenswrapper[4834]: I0121 14:50:37.052429 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ed4-account-create-update-nplb4" event={"ID":"3a2958c1-85bd-45a4-962f-5af74b8b2896","Type":"ContainerDied","Data":"4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696"} Jan 21 14:50:37 crc kubenswrapper[4834]: I0121 14:50:37.052848 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ecb437f010c17e37d40c0652543b53f206d9b9aeda6a9d61851a7ab3cf57696" Jan 21 14:50:37 crc kubenswrapper[4834]: I0121 14:50:37.052483 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6ed4-account-create-update-nplb4" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.067893 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-23dc-account-create-update-w6z88" event={"ID":"a5f1661a-e972-4a56-bf7c-75e6f605a4c9","Type":"ContainerDied","Data":"410fc619ccef6e448b7f745a48016d7354709722d316fe0b5a3adaa9d84f002b"} Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.068489 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="410fc619ccef6e448b7f745a48016d7354709722d316fe0b5a3adaa9d84f002b" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.070701 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-59fnb" event={"ID":"bb1c174a-a1ea-4c84-a0e3-5241055f2c28","Type":"ContainerDied","Data":"6dac7db3a22c0dd39fe8aee22772d06de770d0055697522686a6631a356520eb"} Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.070734 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dac7db3a22c0dd39fe8aee22772d06de770d0055697522686a6631a356520eb" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.072138 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l5mkc" event={"ID":"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff","Type":"ContainerDied","Data":"d1b97bc8f2f664d864e3d18a40d7f873a47420745e6d8e43db476eeef320f82b"} Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.072162 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b97bc8f2f664d864e3d18a40d7f873a47420745e6d8e43db476eeef320f82b" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.073267 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cv77m" event={"ID":"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b","Type":"ContainerDied","Data":"23d3f3fad6c7bdff43d05f94a29639f2e9852371ac89a3c34b21fe55f783b791"} Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.073284 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d3f3fad6c7bdff43d05f94a29639f2e9852371ac89a3c34b21fe55f783b791" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.201992 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cv77m" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.226942 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.244764 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.254720 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l5mkc" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.266504 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.369614 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config\") pod \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370004 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts\") pod \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370062 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbkq7\" (UniqueName: \"kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7\") pod \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\" (UID: \"5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370094 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gqcn\" (UniqueName: \"kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn\") pod \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370119 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts\") pod \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370366 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7q7h\" (UniqueName: \"kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h\") pod \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370663 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb\") pod \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370754 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wtm\" (UniqueName: \"kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm\") pod \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\" (UID: \"bb1c174a-a1ea-4c84-a0e3-5241055f2c28\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370804 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc\") pod \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\" (UID: \"5b0ba3bb-e346-4168-8be9-bf9e70d13121\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370847 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts\") 
pod \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\" (UID: \"a5f1661a-e972-4a56-bf7c-75e6f605a4c9\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.370898 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6666n\" (UniqueName: \"kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n\") pod \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.371129 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts\") pod \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\" (UID: \"e7214b49-f9f3-4926-a1b5-43edd8ccf6ff\") " Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.371304 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" (UID: "5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.372029 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb1c174a-a1ea-4c84-a0e3-5241055f2c28" (UID: "bb1c174a-a1ea-4c84-a0e3-5241055f2c28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.372535 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.372575 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.374069 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5f1661a-e972-4a56-bf7c-75e6f605a4c9" (UID: "a5f1661a-e972-4a56-bf7c-75e6f605a4c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.376484 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h" (OuterVolumeSpecName: "kube-api-access-x7q7h") pod "a5f1661a-e972-4a56-bf7c-75e6f605a4c9" (UID: "a5f1661a-e972-4a56-bf7c-75e6f605a4c9"). InnerVolumeSpecName "kube-api-access-x7q7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.377255 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm" (OuterVolumeSpecName: "kube-api-access-k2wtm") pod "bb1c174a-a1ea-4c84-a0e3-5241055f2c28" (UID: "bb1c174a-a1ea-4c84-a0e3-5241055f2c28"). 
InnerVolumeSpecName "kube-api-access-k2wtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.377370 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn" (OuterVolumeSpecName: "kube-api-access-7gqcn") pod "5b0ba3bb-e346-4168-8be9-bf9e70d13121" (UID: "5b0ba3bb-e346-4168-8be9-bf9e70d13121"). InnerVolumeSpecName "kube-api-access-7gqcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.377485 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7" (OuterVolumeSpecName: "kube-api-access-gbkq7") pod "5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" (UID: "5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b"). InnerVolumeSpecName "kube-api-access-gbkq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.382146 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n" (OuterVolumeSpecName: "kube-api-access-6666n") pod "e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" (UID: "e7214b49-f9f3-4926-a1b5-43edd8ccf6ff"). InnerVolumeSpecName "kube-api-access-6666n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.393679 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" (UID: "e7214b49-f9f3-4926-a1b5-43edd8ccf6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.427561 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b0ba3bb-e346-4168-8be9-bf9e70d13121" (UID: "5b0ba3bb-e346-4168-8be9-bf9e70d13121"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.428552 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b0ba3bb-e346-4168-8be9-bf9e70d13121" (UID: "5b0ba3bb-e346-4168-8be9-bf9e70d13121"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.439017 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config" (OuterVolumeSpecName: "config") pod "5b0ba3bb-e346-4168-8be9-bf9e70d13121" (UID: "5b0ba3bb-e346-4168-8be9-bf9e70d13121"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475751 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475793 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wtm\" (UniqueName: \"kubernetes.io/projected/bb1c174a-a1ea-4c84-a0e3-5241055f2c28-kube-api-access-k2wtm\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475807 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475816 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475825 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6666n\" (UniqueName: \"kubernetes.io/projected/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-kube-api-access-6666n\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475834 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475842 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b0ba3bb-e346-4168-8be9-bf9e70d13121-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475852 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbkq7\" (UniqueName: \"kubernetes.io/projected/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b-kube-api-access-gbkq7\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475860 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gqcn\" (UniqueName: \"kubernetes.io/projected/5b0ba3bb-e346-4168-8be9-bf9e70d13121-kube-api-access-7gqcn\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.475870 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7q7h\" (UniqueName: \"kubernetes.io/projected/a5f1661a-e972-4a56-bf7c-75e6f605a4c9-kube-api-access-x7q7h\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:38 crc kubenswrapper[4834]: I0121 14:50:38.571938 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9ca3-account-create-update-57lcl"] Jan 21 14:50:38 crc kubenswrapper[4834]: W0121 14:50:38.578017 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb280fdf_08e2_4c0b_bc56_535c8a85be1a.slice/crio-784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc WatchSource:0}: Error finding container 784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc: Status 404 returned error can't find the container with id 784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 
14:50:38.688124 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sw7v4"] Jan 21 14:50:42 crc kubenswrapper[4834]: W0121 14:50:38.695810 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod692edde8_3448_4e1f_8996_b301c823e43d.slice/crio-20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198 WatchSource:0}: Error finding container 20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198: Status 404 returned error can't find the container with id 20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198 Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.092797 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sw7v4" event={"ID":"692edde8-3448-4e1f-8996-b301c823e43d","Type":"ContainerStarted","Data":"eec097a377919f1aac1f1e4555fbca25a4859225cb53121ee4397bb159e45c71"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.093305 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sw7v4" event={"ID":"692edde8-3448-4e1f-8996-b301c823e43d","Type":"ContainerStarted","Data":"20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.100665 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4nq5q" event={"ID":"e4069528-b187-472b-a3b0-fa87693b4626","Type":"ContainerStarted","Data":"91ae6eb26515c13884731bd4fedbd55169de35554d0a031e75ea1836e5893a76"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.105356 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-57lcl" event={"ID":"db280fdf-08e2-4c0b-bc56-535c8a85be1a","Type":"ContainerStarted","Data":"a11fb495757824d138ac77a3fef9d20b66153b3238dbfc25af87a935ad9bf9f6"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.105395 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-57lcl" event={"ID":"db280fdf-08e2-4c0b-bc56-535c8a85be1a","Type":"ContainerStarted","Data":"784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.107309 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cv77m" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.109070 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-p25vl" event={"ID":"5b0ba3bb-e346-4168-8be9-bf9e70d13121","Type":"ContainerDied","Data":"3f5d5c01dcca833e8d0e03659421d020989bbbf7a1d286b34886afd771849038"} Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.109161 4834 scope.go:117] "RemoveContainer" containerID="72d073516b45715316a88246e07be967efa676f3ddeb2bea78d1200fd1e85031" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.109256 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l5mkc" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.109273 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-23dc-account-create-update-w6z88" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.109325 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-p25vl" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.125609 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-59fnb" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.156122 4834 scope.go:117] "RemoveContainer" containerID="9e9ee4c0dcff9375647a5f8eea0138a24498408929d944c90862a27e60ce0d5c" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.171515 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-sw7v4" podStartSLOduration=5.171491721 podStartE2EDuration="5.171491721s" podCreationTimestamp="2026-01-21 14:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:39.127381359 +0000 UTC m=+1185.101730414" watchObservedRunningTime="2026-01-21 14:50:39.171491721 +0000 UTC m=+1185.145840766" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.174100 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4nq5q" podStartSLOduration=2.871899708 podStartE2EDuration="12.174087422s" podCreationTimestamp="2026-01-21 14:50:27 +0000 UTC" firstStartedPulling="2026-01-21 14:50:29.139271509 +0000 UTC m=+1175.113620554" lastFinishedPulling="2026-01-21 14:50:38.441459223 +0000 UTC m=+1184.415808268" observedRunningTime="2026-01-21 14:50:39.170144569 +0000 UTC m=+1185.144493624" watchObservedRunningTime="2026-01-21 14:50:39.174087422 +0000 UTC m=+1185.148436467" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.188118 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:39.188381 4834 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:39.188423 4834 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:39.188489 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift podName:835da3fd-0497-4072-9d76-122d19300787 nodeName:}" failed. No retries permitted until 2026-01-21 14:50:55.188467288 +0000 UTC m=+1201.162816333 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift") pod "swift-storage-0" (UID: "835da3fd-0497-4072-9d76-122d19300787") : configmap "swift-ring-files" not found Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.245051 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:39.266542 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-p25vl"] Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:40.134870 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9ca3-account-create-update-57lcl" podStartSLOduration=6.134849863 podStartE2EDuration="6.134849863s" podCreationTimestamp="2026-01-21 14:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:40.129044123 +0000 UTC m=+1186.103393168" watchObservedRunningTime="2026-01-21 14:50:40.134849863 +0000 UTC m=+1186.109198908" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:40.293025 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:50:42 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:50:42 crc kubenswrapper[4834]: > Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:40.333959 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" path="/var/lib/kubelet/pods/5b0ba3bb-e346-4168-8be9-bf9e70d13121/volumes" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:40.860019 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l5mkc"] Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:40.866407 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l5mkc"] Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.335639 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" path="/var/lib/kubelet/pods/e7214b49-f9f3-4926-a1b5-43edd8ccf6ff/volumes" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.545704 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cgrp9"] Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546336 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546368 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546395 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546409 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546447 4834 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3a2958c1-85bd-45a4-962f-5af74b8b2896" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546500 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2958c1-85bd-45a4-962f-5af74b8b2896" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546515 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f1661a-e972-4a56-bf7c-75e6f605a4c9" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546528 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f1661a-e972-4a56-bf7c-75e6f605a4c9" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546547 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="init" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546558 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="init" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546571 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1c174a-a1ea-4c84-a0e3-5241055f2c28" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546582 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1c174a-a1ea-4c84-a0e3-5241055f2c28" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: E0121 14:50:42.546609 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546620 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546906 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb1c174a-a1ea-4c84-a0e3-5241055f2c28" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546952 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7214b49-f9f3-4926-a1b5-43edd8ccf6ff" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546978 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" containerName="mariadb-database-create" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.546993 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.547013 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2958c1-85bd-45a4-962f-5af74b8b2896" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.547028 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f1661a-e972-4a56-bf7c-75e6f605a4c9" containerName="mariadb-account-create-update" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.547912 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.550736 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.552804 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cgrp9"] Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.650249 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.650523 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgg2r\" (UniqueName: \"kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.752481 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgg2r\" (UniqueName: \"kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.753136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.753789 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.773814 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgg2r\" (UniqueName: \"kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r\") pod \"root-account-create-update-cgrp9\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.867150 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:42 crc kubenswrapper[4834]: I0121 14:50:42.959235 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7878659675-p25vl" podUID="5b0ba3bb-e346-4168-8be9-bf9e70d13121" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 21 14:50:43 crc kubenswrapper[4834]: I0121 14:50:43.158678 4834 generic.go:334] "Generic (PLEG): container finished" podID="692edde8-3448-4e1f-8996-b301c823e43d" containerID="eec097a377919f1aac1f1e4555fbca25a4859225cb53121ee4397bb159e45c71" exitCode=0 Jan 21 14:50:43 crc kubenswrapper[4834]: I0121 14:50:43.159014 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sw7v4" event={"ID":"692edde8-3448-4e1f-8996-b301c823e43d","Type":"ContainerDied","Data":"eec097a377919f1aac1f1e4555fbca25a4859225cb53121ee4397bb159e45c71"} Jan 21 14:50:43 crc kubenswrapper[4834]: I0121 14:50:43.165202 4834 generic.go:334] "Generic (PLEG): container finished" podID="db280fdf-08e2-4c0b-bc56-535c8a85be1a" containerID="a11fb495757824d138ac77a3fef9d20b66153b3238dbfc25af87a935ad9bf9f6" exitCode=0 Jan 21 14:50:43 crc kubenswrapper[4834]: I0121 14:50:43.165253 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-57lcl" event={"ID":"db280fdf-08e2-4c0b-bc56-535c8a85be1a","Type":"ContainerDied","Data":"a11fb495757824d138ac77a3fef9d20b66153b3238dbfc25af87a935ad9bf9f6"} Jan 21 14:50:43 crc kubenswrapper[4834]: I0121 14:50:43.262223 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cgrp9"] Jan 21 14:50:43 crc kubenswrapper[4834]: W0121 14:50:43.268815 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc0c49ba_9fd9_4d95_bdb8_819212e5efdd.slice/crio-a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee WatchSource:0}: Error finding container a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee: Status 404 returned error can't find the container with id a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.173732 4834 generic.go:334] "Generic (PLEG): container finished" podID="cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" containerID="e7d2fdd8aa6d58481bcc56fd724bf04ab9b6133ef98b9686261c001278873999" exitCode=0 Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.173791 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cgrp9" event={"ID":"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd","Type":"ContainerDied","Data":"e7d2fdd8aa6d58481bcc56fd724bf04ab9b6133ef98b9686261c001278873999"} Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.174134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cgrp9" event={"ID":"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd","Type":"ContainerStarted","Data":"a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee"} Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.650837 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.667644 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.697449 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts\") pod \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.697525 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts\") pod \"692edde8-3448-4e1f-8996-b301c823e43d\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.697578 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qpvq\" (UniqueName: \"kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq\") pod \"692edde8-3448-4e1f-8996-b301c823e43d\" (UID: \"692edde8-3448-4e1f-8996-b301c823e43d\") " Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.697645 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjwkd\" (UniqueName: \"kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd\") pod \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\" (UID: \"db280fdf-08e2-4c0b-bc56-535c8a85be1a\") " Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.704690 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "692edde8-3448-4e1f-8996-b301c823e43d" (UID: "692edde8-3448-4e1f-8996-b301c823e43d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.704743 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db280fdf-08e2-4c0b-bc56-535c8a85be1a" (UID: "db280fdf-08e2-4c0b-bc56-535c8a85be1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.708171 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd" (OuterVolumeSpecName: "kube-api-access-vjwkd") pod "db280fdf-08e2-4c0b-bc56-535c8a85be1a" (UID: "db280fdf-08e2-4c0b-bc56-535c8a85be1a"). InnerVolumeSpecName "kube-api-access-vjwkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.709189 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq" (OuterVolumeSpecName: "kube-api-access-7qpvq") pod "692edde8-3448-4e1f-8996-b301c823e43d" (UID: "692edde8-3448-4e1f-8996-b301c823e43d"). InnerVolumeSpecName "kube-api-access-7qpvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.800382 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qpvq\" (UniqueName: \"kubernetes.io/projected/692edde8-3448-4e1f-8996-b301c823e43d-kube-api-access-7qpvq\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.800416 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjwkd\" (UniqueName: \"kubernetes.io/projected/db280fdf-08e2-4c0b-bc56-535c8a85be1a-kube-api-access-vjwkd\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.800426 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db280fdf-08e2-4c0b-bc56-535c8a85be1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:44 crc kubenswrapper[4834]: I0121 14:50:44.800436 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692edde8-3448-4e1f-8996-b301c823e43d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.202366 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9ca3-account-create-update-57lcl" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.202407 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-57lcl" event={"ID":"db280fdf-08e2-4c0b-bc56-535c8a85be1a","Type":"ContainerDied","Data":"784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc"} Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.202466 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784b3cbcb42379308623f879f230b6036627eb956f4a996f204649aac59bc3fc" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.204007 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sw7v4" event={"ID":"692edde8-3448-4e1f-8996-b301c823e43d","Type":"ContainerDied","Data":"20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198"} Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.204040 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fd2361cbff2ea59743f2c121702d8c2c0d2637c435d79226783db96df7f198" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.204080 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sw7v4" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.296044 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:50:45 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:50:45 crc kubenswrapper[4834]: > Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.543395 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.616450 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgg2r\" (UniqueName: \"kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r\") pod \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.616920 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts\") pod \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\" (UID: \"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd\") " Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.617700 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" (UID: "cc0c49ba-9fd9-4d95-bdb8-819212e5efdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.622796 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r" (OuterVolumeSpecName: "kube-api-access-zgg2r") pod "cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" (UID: "cc0c49ba-9fd9-4d95-bdb8-819212e5efdd"). InnerVolumeSpecName "kube-api-access-zgg2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.718609 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:45 crc kubenswrapper[4834]: I0121 14:50:45.718649 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgg2r\" (UniqueName: \"kubernetes.io/projected/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd-kube-api-access-zgg2r\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:46 crc kubenswrapper[4834]: I0121 14:50:46.212687 4834 generic.go:334] "Generic (PLEG): container finished" podID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerID="a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c" exitCode=0 Jan 21 14:50:46 crc kubenswrapper[4834]: I0121 14:50:46.213091 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerDied","Data":"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c"} Jan 21 14:50:46 crc kubenswrapper[4834]: I0121 14:50:46.214213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cgrp9" event={"ID":"cc0c49ba-9fd9-4d95-bdb8-819212e5efdd","Type":"ContainerDied","Data":"a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee"} Jan 21 14:50:46 crc kubenswrapper[4834]: I0121 14:50:46.214335 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a916226a12a8a29fed8a1a4393fcd37be9c3e09a6bebc71ba883d227348107ee" Jan 21 14:50:46 crc kubenswrapper[4834]: I0121 14:50:46.214511 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cgrp9" Jan 21 14:50:47 crc kubenswrapper[4834]: I0121 14:50:47.222770 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerStarted","Data":"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf"} Jan 21 14:50:47 crc kubenswrapper[4834]: I0121 14:50:47.224247 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:47 crc kubenswrapper[4834]: I0121 14:50:47.266517 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.921540441 podStartE2EDuration="1m23.266493991s" podCreationTimestamp="2026-01-21 14:49:24 +0000 UTC" firstStartedPulling="2026-01-21 14:49:27.466556405 +0000 UTC m=+1113.440905450" lastFinishedPulling="2026-01-21 14:50:11.811509955 +0000 UTC m=+1157.785859000" observedRunningTime="2026-01-21 14:50:47.258374799 +0000 UTC m=+1193.232723854" watchObservedRunningTime="2026-01-21 14:50:47.266493991 +0000 UTC m=+1193.240843036" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.240282 4834 generic.go:334] "Generic (PLEG): container finished" podID="e4069528-b187-472b-a3b0-fa87693b4626" containerID="91ae6eb26515c13884731bd4fedbd55169de35554d0a031e75ea1836e5893a76" exitCode=0 Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.240383 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4nq5q" event={"ID":"e4069528-b187-472b-a3b0-fa87693b4626","Type":"ContainerDied","Data":"91ae6eb26515c13884731bd4fedbd55169de35554d0a031e75ea1836e5893a76"} Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.831654 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rkw29"] Jan 21 14:50:49 crc kubenswrapper[4834]: E0121 14:50:49.832135 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db280fdf-08e2-4c0b-bc56-535c8a85be1a" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832158 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="db280fdf-08e2-4c0b-bc56-535c8a85be1a" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: E0121 14:50:49.832179 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692edde8-3448-4e1f-8996-b301c823e43d" containerName="mariadb-database-create" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832190 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="692edde8-3448-4e1f-8996-b301c823e43d" containerName="mariadb-database-create" Jan 21 14:50:49 crc kubenswrapper[4834]: E0121 14:50:49.832218 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832228 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832461 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="db280fdf-08e2-4c0b-bc56-535c8a85be1a" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832485 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="692edde8-3448-4e1f-8996-b301c823e43d" 
containerName="mariadb-database-create" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.832498 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" containerName="mariadb-account-create-update" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.833201 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.838067 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.838084 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z26b8" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.865001 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rkw29"] Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.898736 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.898798 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.898861 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62r85\" (UniqueName: \"kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:49 crc kubenswrapper[4834]: I0121 14:50:49.898905 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.001386 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.001476 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.001524 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62r85\" (UniqueName: \"kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85\") pod \"glance-db-sync-rkw29\" 
(UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.001558 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.008553 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.008775 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.008879 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.021169 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62r85\" (UniqueName: \"kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85\") pod \"glance-db-sync-rkw29\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.155911 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rkw29" Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.482450 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:50:50 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:50:50 crc kubenswrapper[4834]: > Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.947347 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cgrp9"] Jan 21 14:50:50 crc kubenswrapper[4834]: I0121 14:50:50.956332 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cgrp9"] Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.005868 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4nq5q" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.022494 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rkw29"] Jan 21 14:50:51 crc kubenswrapper[4834]: W0121 14:50:51.027851 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5abd5d5_addd_4b84_a301_86a55a7e23cf.slice/crio-3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5 WatchSource:0}: Error finding container 3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5: Status 404 returned error can't find the container with id 3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5 Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.124196 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.124261 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.124331 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.125216 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.125484 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.125523 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.125559 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.125821 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7csz\" (UniqueName: \"kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz\") pod \"e4069528-b187-472b-a3b0-fa87693b4626\" (UID: \"e4069528-b187-472b-a3b0-fa87693b4626\") " Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.126300 4834 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.126638 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.131677 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz" (OuterVolumeSpecName: "kube-api-access-v7csz") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "kube-api-access-v7csz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.134355 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.150141 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.151375 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.157010 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts" (OuterVolumeSpecName: "scripts") pod "e4069528-b187-472b-a3b0-fa87693b4626" (UID: "e4069528-b187-472b-a3b0-fa87693b4626"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228500 4834 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e4069528-b187-472b-a3b0-fa87693b4626-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228543 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7csz\" (UniqueName: \"kubernetes.io/projected/e4069528-b187-472b-a3b0-fa87693b4626-kube-api-access-v7csz\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228559 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228571 4834 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228586 4834 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e4069528-b187-472b-a3b0-fa87693b4626-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.228598 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4069528-b187-472b-a3b0-fa87693b4626-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.261980 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rkw29" event={"ID":"b5abd5d5-addd-4b84-a301-86a55a7e23cf","Type":"ContainerStarted","Data":"3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5"} Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.264187 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4nq5q" event={"ID":"e4069528-b187-472b-a3b0-fa87693b4626","Type":"ContainerDied","Data":"6d10c6df35d9b022b50620a842c56c8d50c817c67fff88948ab54e2597408804"} Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.264222 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d10c6df35d9b022b50620a842c56c8d50c817c67fff88948ab54e2597408804" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.264278 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4nq5q" Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.274687 4834 generic.go:334] "Generic (PLEG): container finished" podID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerID="8018ed8fca11c93bbc50ad4d89fee33fca796f9f20da5b7258ee155a5c1edde0" exitCode=0 Jan 21 14:50:51 crc kubenswrapper[4834]: I0121 14:50:51.274757 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerDied","Data":"8018ed8fca11c93bbc50ad4d89fee33fca796f9f20da5b7258ee155a5c1edde0"} Jan 21 14:50:52 crc kubenswrapper[4834]: I0121 14:50:52.294187 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerStarted","Data":"41b8202e62174a8eda17f1a9b9dd2a9295f09268d93892ad31cfad9446e70c71"} Jan 21 14:50:52 crc kubenswrapper[4834]: I0121 14:50:52.295399 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 14:50:52 crc kubenswrapper[4834]: I0121 14:50:52.338267 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0c49ba-9fd9-4d95-bdb8-819212e5efdd" path="/var/lib/kubelet/pods/cc0c49ba-9fd9-4d95-bdb8-819212e5efdd/volumes" Jan 21 14:50:54 crc kubenswrapper[4834]: I0121 14:50:54.365083 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371945.489714 podStartE2EDuration="1m31.36506304s" podCreationTimestamp="2026-01-21 14:49:23 +0000 UTC" firstStartedPulling="2026-01-21 14:49:26.086042186 +0000 UTC m=+1112.060391231" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:52.333911278 +0000 UTC m=+1198.308260323" watchObservedRunningTime="2026-01-21 14:50:54.36506304 +0000 UTC m=+1200.339412085" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.208370 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.217388 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"swift-storage-0\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " pod="openstack/swift-storage-0" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.280805 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:50:55 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:50:55 crc kubenswrapper[4834]: > Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.303351 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.655399 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.890682 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mjhbn"] Jan 21 14:50:55 crc kubenswrapper[4834]: E0121 14:50:55.891260 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4069528-b187-472b-a3b0-fa87693b4626" containerName="swift-ring-rebalance" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.891284 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4069528-b187-472b-a3b0-fa87693b4626" containerName="swift-ring-rebalance" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.891625 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4069528-b187-472b-a3b0-fa87693b4626" containerName="swift-ring-rebalance" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.899647 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.903380 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.908951 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mjhbn"] Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.927412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ktp7\" (UniqueName: \"kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.927494 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:55 crc kubenswrapper[4834]: I0121 14:50:55.929974 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:50:55 crc kubenswrapper[4834]: W0121 14:50:55.935329 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-e366ab495ac8bbb23d901159b4caa451f94ffa5685dde04cea167f4e96d7b1d8 WatchSource:0}: Error finding container e366ab495ac8bbb23d901159b4caa451f94ffa5685dde04cea167f4e96d7b1d8: Status 404 returned error can't find the container with id e366ab495ac8bbb23d901159b4caa451f94ffa5685dde04cea167f4e96d7b1d8 Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.028765 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.029522 
4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ktp7\" (UniqueName: \"kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.030245 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.059691 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ktp7\" (UniqueName: \"kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7\") pod \"root-account-create-update-mjhbn\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.209287 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.231541 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mjhbn" Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.359396 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"e366ab495ac8bbb23d901159b4caa451f94ffa5685dde04cea167f4e96d7b1d8"} Jan 21 14:50:56 crc kubenswrapper[4834]: I0121 14:50:56.776103 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mjhbn"] Jan 21 14:50:56 crc kubenswrapper[4834]: W0121 14:50:56.792180 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c461d8_2b64_4737_b0df_da4bddde822d.slice/crio-23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e WatchSource:0}: Error finding container 23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e: Status 404 returned error can't find the container with id 23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e Jan 21 14:50:57 crc kubenswrapper[4834]: I0121 14:50:57.374828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mjhbn" event={"ID":"57c461d8-2b64-4737-b0df-da4bddde822d","Type":"ContainerStarted","Data":"ac1ec9e0c07a964592444b918be2b80a83221c681fdd18500124a549b54bbe57"} Jan 21 14:50:57 crc kubenswrapper[4834]: I0121 14:50:57.374953 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mjhbn" event={"ID":"57c461d8-2b64-4737-b0df-da4bddde822d","Type":"ContainerStarted","Data":"23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e"} Jan 21 14:50:57 crc kubenswrapper[4834]: I0121 14:50:57.393200 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mjhbn" podStartSLOduration=2.393177331 podStartE2EDuration="2.393177331s" podCreationTimestamp="2026-01-21 14:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:50:57.393055677 +0000 UTC m=+1203.367404732" watchObservedRunningTime="2026-01-21 14:50:57.393177331 +0000 UTC m=+1203.367526376" Jan 21 14:50:59 crc kubenswrapper[4834]: I0121 14:50:59.395834 4834 generic.go:334] "Generic (PLEG): container finished" podID="57c461d8-2b64-4737-b0df-da4bddde822d" containerID="ac1ec9e0c07a964592444b918be2b80a83221c681fdd18500124a549b54bbe57" exitCode=0 Jan 21 14:50:59 crc kubenswrapper[4834]: I0121 14:50:59.396055 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mjhbn" event={"ID":"57c461d8-2b64-4737-b0df-da4bddde822d","Type":"ContainerDied","Data":"ac1ec9e0c07a964592444b918be2b80a83221c681fdd18500124a549b54bbe57"} Jan 21 14:51:00 crc kubenswrapper[4834]: I0121 14:51:00.784793 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:51:00 crc kubenswrapper[4834]: I0121 14:51:00.819270 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:51:00 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:51:00 crc kubenswrapper[4834]: > Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.074997 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9wtcs-config-59lft"] Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.076844 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.080210 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.084806 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wtcs-config-59lft"] Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198438 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198543 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198603 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198663 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198703 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.198749 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2wf9\" (UniqueName: \"kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.300861 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.300985 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301025 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301097 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2wf9\" (UniqueName: \"kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301386 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301457 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301721 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301819 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.301962 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.302094 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.303829 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.325832 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2wf9\" (UniqueName: \"kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9\") pod \"ovn-controller-9wtcs-config-59lft\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:01 crc kubenswrapper[4834]: I0121 14:51:01.415642 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:05 crc kubenswrapper[4834]: I0121 14:51:05.322616 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:51:05 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:51:05 crc kubenswrapper[4834]: > Jan 21 14:51:05 crc kubenswrapper[4834]: I0121 14:51:05.377146 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.012715 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4whjk"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.013896 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.021916 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4whjk"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.112248 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fsjzd"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.115677 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.127443 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fsjzd"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.141396 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1e5f-account-create-update-p7tv4"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.151053 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.158663 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.187987 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1e5f-account-create-update-p7tv4"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.212479 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.212552 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbw6t\" (UniqueName: \"kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.212590 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfqg\" (UniqueName: \"kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.212662 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjfb\" (UniqueName: \"kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.212732 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc 
kubenswrapper[4834]: I0121 14:51:06.212760 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.232161 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8d6d-account-create-update-gp9sw"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.237003 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.241406 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.245395 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d6d-account-create-update-gp9sw"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.312166 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-zbc8d"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.313543 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314339 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfqg\" (UniqueName: \"kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314392 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314425 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5p8l\" (UniqueName: \"kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314553 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjfb\" (UniqueName: \"kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314730 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314773 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314836 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz5t\" (UniqueName: \"kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314871 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314916 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.314994 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbw6t\" (UniqueName: \"kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.315875 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.315975 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.316015 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 
14:51:06.319484 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.319554 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qqrx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.319504 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.319771 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.334815 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbc8d"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.342911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbw6t\" (UniqueName: \"kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t\") pod \"cinder-1e5f-account-create-update-p7tv4\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.343440 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfqg\" (UniqueName: \"kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg\") pod \"barbican-db-create-fsjzd\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.357124 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjfb\" (UniqueName: \"kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb\") pod \"cinder-db-create-4whjk\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.462898 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.463471 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.463523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5p8l\" (UniqueName: \"kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.463632 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.463686 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkz5t\" (UniqueName: \"kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.463729 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.465879 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.470913 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.484578 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x6zbx"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.486344 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.490054 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.512853 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.521140 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x6zbx"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.536151 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bc05-account-create-update-l986q"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.537962 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.544623 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkz5t\" (UniqueName: \"kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t\") pod \"keystone-db-sync-zbc8d\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.545759 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.554399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5p8l\" (UniqueName: \"kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l\") pod \"barbican-8d6d-account-create-update-gp9sw\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.555400 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc05-account-create-update-l986q"] Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.558617 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.732063 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.799496 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.799664 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88sr\" (UniqueName: \"kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.799750 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgcss\" (UniqueName: \"kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.799828 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.800873 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.903178 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88sr\" (UniqueName: \"kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.903608 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgcss\" (UniqueName: \"kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.903640 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.903786 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.904681 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.904805 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.928568 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgcss\" (UniqueName: \"kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss\") pod \"neutron-bc05-account-create-update-l986q\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:06 crc kubenswrapper[4834]: I0121 14:51:06.930990 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88sr\" (UniqueName: \"kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr\") pod \"neutron-db-create-x6zbx\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:07 crc kubenswrapper[4834]: I0121 14:51:07.168404 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:07 crc kubenswrapper[4834]: I0121 14:51:07.169466 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:10 crc kubenswrapper[4834]: I0121 14:51:10.281918 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:51:10 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:51:10 crc kubenswrapper[4834]: > Jan 21 14:51:11 crc kubenswrapper[4834]: I0121 14:51:11.361252 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:51:11 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 14:51:11 crc kubenswrapper[4834]: > Jan 21 14:51:11 crc kubenswrapper[4834]: I0121 14:51:11.468123 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nmfb6" podUID="872f6769-1a60-42d1-911d-0db9cfba03ce" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.313711 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.314399 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.313711 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f"
Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.314399 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62r85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-rkw29_openstack(b5abd5d5-addd-4b84-a301-86a55a7e23cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.315602 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-rkw29" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf"
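[editor's note] These three entries are one failure chain: the CRI image pull is canceled mid-copy (log.go:32), kuberuntime_manager dumps the entire resolved &Container{...} spec through its UnhandledError logger, and the pod worker gives up on this sync (pod_workers.go:1301) with reason ErrImagePull; the retry below at 14:51:13.903529 surfaces as ImagePullBackOff. From the API side the same state is visible as a waiting reason on the container status. A sketch of how one might observe it with client-go (clientset construction omitted; pod name from the log):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // reportPullState prints the waiting reason for each container of the
    // failing pod; expect "ErrImagePull" first, then "ImagePullBackOff".
    func reportPullState(ctx context.Context, cs kubernetes.Interface) error {
        pod, err := cs.CoreV1().Pods("openstack").Get(ctx, "glance-db-sync-rkw29", metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, st := range pod.Status.ContainerStatuses {
            if w := st.State.Waiting; w != nil {
                fmt.Printf("%s: %s (%s)\n", st.Name, w.Reason, w.Message)
            }
        }
        return nil
    }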
Need to start a new one" pod="openstack/root-account-create-update-mjhbn" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.597233 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts\") pod \"57c461d8-2b64-4737-b0df-da4bddde822d\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.597580 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ktp7\" (UniqueName: \"kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7\") pod \"57c461d8-2b64-4737-b0df-da4bddde822d\" (UID: \"57c461d8-2b64-4737-b0df-da4bddde822d\") " Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.600027 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57c461d8-2b64-4737-b0df-da4bddde822d" (UID: "57c461d8-2b64-4737-b0df-da4bddde822d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.643747 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7" (OuterVolumeSpecName: "kube-api-access-5ktp7") pod "57c461d8-2b64-4737-b0df-da4bddde822d" (UID: "57c461d8-2b64-4737-b0df-da4bddde822d"). InnerVolumeSpecName "kube-api-access-5ktp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.696688 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d6d-account-create-update-gp9sw"] Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.713417 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57c461d8-2b64-4737-b0df-da4bddde822d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.713492 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ktp7\" (UniqueName: \"kubernetes.io/projected/57c461d8-2b64-4737-b0df-da4bddde822d-kube-api-access-5ktp7\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.885470 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314"} Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.892320 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mjhbn" event={"ID":"57c461d8-2b64-4737-b0df-da4bddde822d","Type":"ContainerDied","Data":"23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e"} Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.892369 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23319f10d9a5d71ed76555efbf8ca688ab174b9c8ffd385f4cc7e8596b9e083e" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.892436 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mjhbn" Jan 21 14:51:13 crc kubenswrapper[4834]: I0121 14:51:13.900236 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d6d-account-create-update-gp9sw" event={"ID":"7775589a-98d8-4291-9c93-26bb67d1c99f","Type":"ContainerStarted","Data":"9b0b00106e5308d3f29d90343cb2912f205f48c492d216f1780c6898d3c26999"} Jan 21 14:51:13 crc kubenswrapper[4834]: E0121 14:51:13.903529 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-rkw29" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.402433 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4whjk"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.431271 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc05-account-create-update-l986q"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.445710 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1e5f-account-create-update-p7tv4"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.456610 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wtcs-config-59lft"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.468563 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-zbc8d"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.490638 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x6zbx"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.498875 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fsjzd"] Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.915318 4834 generic.go:334] "Generic (PLEG): container finished" podID="7775589a-98d8-4291-9c93-26bb67d1c99f" containerID="de2118a3c8532e3ed45fb4f40a6d807fde2e842586bd7cfe0287f3de051a0325" exitCode=0 Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.915775 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d6d-account-create-update-gp9sw" event={"ID":"7775589a-98d8-4291-9c93-26bb67d1c99f","Type":"ContainerDied","Data":"de2118a3c8532e3ed45fb4f40a6d807fde2e842586bd7cfe0287f3de051a0325"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.924912 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs-config-59lft" event={"ID":"66a84771-5b3a-4320-8f4c-d02c11d66c66","Type":"ContainerStarted","Data":"383c05e156769911de64cce006521de1a0f7d2414c6973c7a0ab36a32d8a2828"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.924971 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs-config-59lft" event={"ID":"66a84771-5b3a-4320-8f4c-d02c11d66c66","Type":"ContainerStarted","Data":"3e0d5595dbc3240aede9e5721d797ad6df2d560c35a574a7ac397f14494d7c61"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.940532 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-p7tv4" event={"ID":"dc890a7e-8c34-46c9-ae49-3e5117149f34","Type":"ContainerStarted","Data":"7450baf66953d71a3a8bda46db6cf374715a858243decdf6ef7582dec8f2a5b5"} Jan 
21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.940597 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-p7tv4" event={"ID":"dc890a7e-8c34-46c9-ae49-3e5117149f34","Type":"ContainerStarted","Data":"6fb753d4e716259c6a322c188c00a0e6ab192edcf0922d084d30eeed675673e9"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.948421 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbc8d" event={"ID":"367e30a8-4fb2-47e5-a2f4-5e481d37fcca","Type":"ContainerStarted","Data":"93e7986416a4a320e19c41dc6cc16fc5bc63bd46597ea5700bef56e7ef47d14b"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.954609 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9wtcs-config-59lft" podStartSLOduration=13.954593371 podStartE2EDuration="13.954593371s" podCreationTimestamp="2026-01-21 14:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:14.953368162 +0000 UTC m=+1220.927717207" watchObservedRunningTime="2026-01-21 14:51:14.954593371 +0000 UTC m=+1220.928942416" Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.963626 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fsjzd" event={"ID":"115e04e0-b027-42bb-bdbf-f860ef73aef3","Type":"ContainerStarted","Data":"43faf0c44ed1cad6a34d28fbbeba8aa4bf7e8d3146039b96edf40f928b38b322"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.963680 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fsjzd" event={"ID":"115e04e0-b027-42bb-bdbf-f860ef73aef3","Type":"ContainerStarted","Data":"abe5528abe457fe5fb7e0cb76d51c7d9e0f9894b8bdf3ece73da8badcccb8d51"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.978337 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4whjk" event={"ID":"1491d4f8-0e00-406a-8e91-51d3dc0e5a68","Type":"ContainerStarted","Data":"42991a0486e1e8bb2be287a8f47cbf50c0e3abd28979cca6ae904abd97a4f38c"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.978391 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4whjk" event={"ID":"1491d4f8-0e00-406a-8e91-51d3dc0e5a68","Type":"ContainerStarted","Data":"457efdd88dcb4311e50ad71734c2adb2c8e487edacf06ff43fc049ce24c61271"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.980368 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1e5f-account-create-update-p7tv4" podStartSLOduration=8.980346592 podStartE2EDuration="8.980346592s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:14.979305529 +0000 UTC m=+1220.953654574" watchObservedRunningTime="2026-01-21 14:51:14.980346592 +0000 UTC m=+1220.954695637" Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.980742 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x6zbx" event={"ID":"963497bf-0dd8-4d5c-a046-360dbfdaf2a6","Type":"ContainerStarted","Data":"eaed596558af2085827f8e984d860d3b77386778257ee96ee20a0f7e9596633a"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.980789 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x6zbx" 
event={"ID":"963497bf-0dd8-4d5c-a046-360dbfdaf2a6","Type":"ContainerStarted","Data":"adb8de8147c5fb698502c203240964521d19aa8f97657c661a72a1fb6cb817a5"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.983028 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-l986q" event={"ID":"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a","Type":"ContainerStarted","Data":"5edbdeba0c4bdc90ae3b3facbfb2bc42083627d57cdf54fede1603200c831d27"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.983062 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-l986q" event={"ID":"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a","Type":"ContainerStarted","Data":"e29dd69ca94f32b9e498ba60eba9fb52b37e7eed3ae53f1c33c595b82fdca721"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.986209 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.986249 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b"} Jan 21 14:51:14 crc kubenswrapper[4834]: I0121 14:51:14.987768 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398"} Jan 21 14:51:15 crc kubenswrapper[4834]: I0121 14:51:15.000578 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-fsjzd" podStartSLOduration=9.000551059 podStartE2EDuration="9.000551059s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:14.99833436 +0000 UTC m=+1220.972683415" watchObservedRunningTime="2026-01-21 14:51:15.000551059 +0000 UTC m=+1220.974900104" Jan 21 14:51:15 crc kubenswrapper[4834]: I0121 14:51:15.025958 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4whjk" podStartSLOduration=10.025913798 podStartE2EDuration="10.025913798s" podCreationTimestamp="2026-01-21 14:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:15.017389503 +0000 UTC m=+1220.991738548" watchObservedRunningTime="2026-01-21 14:51:15.025913798 +0000 UTC m=+1221.000262843" Jan 21 14:51:15 crc kubenswrapper[4834]: I0121 14:51:15.048692 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bc05-account-create-update-l986q" podStartSLOduration=9.048648415 podStartE2EDuration="9.048648415s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:15.039585053 +0000 UTC m=+1221.013934098" watchObservedRunningTime="2026-01-21 14:51:15.048648415 +0000 UTC m=+1221.022997460" Jan 21 14:51:15 crc kubenswrapper[4834]: I0121 14:51:15.066841 4834 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-db-create-x6zbx" podStartSLOduration=9.06682575 podStartE2EDuration="9.06682575s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:15.060007048 +0000 UTC m=+1221.034356093" watchObservedRunningTime="2026-01-21 14:51:15.06682575 +0000 UTC m=+1221.041174795" Jan 21 14:51:15 crc kubenswrapper[4834]: I0121 14:51:15.326803 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9wtcs" Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.011676 4834 generic.go:334] "Generic (PLEG): container finished" podID="dc890a7e-8c34-46c9-ae49-3e5117149f34" containerID="7450baf66953d71a3a8bda46db6cf374715a858243decdf6ef7582dec8f2a5b5" exitCode=0 Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.011755 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-p7tv4" event={"ID":"dc890a7e-8c34-46c9-ae49-3e5117149f34","Type":"ContainerDied","Data":"7450baf66953d71a3a8bda46db6cf374715a858243decdf6ef7582dec8f2a5b5"} Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.013687 4834 generic.go:334] "Generic (PLEG): container finished" podID="ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" containerID="5edbdeba0c4bdc90ae3b3facbfb2bc42083627d57cdf54fede1603200c831d27" exitCode=0 Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.013733 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-l986q" event={"ID":"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a","Type":"ContainerDied","Data":"5edbdeba0c4bdc90ae3b3facbfb2bc42083627d57cdf54fede1603200c831d27"} Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.015095 4834 generic.go:334] "Generic (PLEG): container finished" podID="115e04e0-b027-42bb-bdbf-f860ef73aef3" containerID="43faf0c44ed1cad6a34d28fbbeba8aa4bf7e8d3146039b96edf40f928b38b322" exitCode=0 Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.015155 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fsjzd" event={"ID":"115e04e0-b027-42bb-bdbf-f860ef73aef3","Type":"ContainerDied","Data":"43faf0c44ed1cad6a34d28fbbeba8aa4bf7e8d3146039b96edf40f928b38b322"} Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.016896 4834 generic.go:334] "Generic (PLEG): container finished" podID="1491d4f8-0e00-406a-8e91-51d3dc0e5a68" containerID="42991a0486e1e8bb2be287a8f47cbf50c0e3abd28979cca6ae904abd97a4f38c" exitCode=0 Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.017067 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4whjk" event={"ID":"1491d4f8-0e00-406a-8e91-51d3dc0e5a68","Type":"ContainerDied","Data":"42991a0486e1e8bb2be287a8f47cbf50c0e3abd28979cca6ae904abd97a4f38c"} Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.023414 4834 generic.go:334] "Generic (PLEG): container finished" podID="963497bf-0dd8-4d5c-a046-360dbfdaf2a6" containerID="eaed596558af2085827f8e984d860d3b77386778257ee96ee20a0f7e9596633a" exitCode=0 Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.023483 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x6zbx" event={"ID":"963497bf-0dd8-4d5c-a046-360dbfdaf2a6","Type":"ContainerDied","Data":"eaed596558af2085827f8e984d860d3b77386778257ee96ee20a0f7e9596633a"} Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.027471 4834 generic.go:334] "Generic 
Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.027471 4834 generic.go:334] "Generic (PLEG): container finished" podID="66a84771-5b3a-4320-8f4c-d02c11d66c66" containerID="383c05e156769911de64cce006521de1a0f7d2414c6973c7a0ab36a32d8a2828" exitCode=0
Jan 21 14:51:16 crc kubenswrapper[4834]: I0121 14:51:16.027528 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs-config-59lft" event={"ID":"66a84771-5b3a-4320-8f4c-d02c11d66c66","Type":"ContainerDied","Data":"383c05e156769911de64cce006521de1a0f7d2414c6973c7a0ab36a32d8a2828"}
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.132166 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x6zbx" event={"ID":"963497bf-0dd8-4d5c-a046-360dbfdaf2a6","Type":"ContainerDied","Data":"adb8de8147c5fb698502c203240964521d19aa8f97657c661a72a1fb6cb817a5"}
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.132696 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb8de8147c5fb698502c203240964521d19aa8f97657c661a72a1fb6cb817a5"
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.134338 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fsjzd" event={"ID":"115e04e0-b027-42bb-bdbf-f860ef73aef3","Type":"ContainerDied","Data":"abe5528abe457fe5fb7e0cb76d51c7d9e0f9894b8bdf3ece73da8badcccb8d51"}
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.134394 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe5528abe457fe5fb7e0cb76d51c7d9e0f9894b8bdf3ece73da8badcccb8d51"
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.202375 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x6zbx"
Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.209580 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.359487 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts\") pod \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.359877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f88sr\" (UniqueName: \"kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr\") pod \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\" (UID: \"963497bf-0dd8-4d5c-a046-360dbfdaf2a6\") " Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.359914 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts\") pod \"115e04e0-b027-42bb-bdbf-f860ef73aef3\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.360016 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whfqg\" (UniqueName: \"kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg\") pod \"115e04e0-b027-42bb-bdbf-f860ef73aef3\" (UID: \"115e04e0-b027-42bb-bdbf-f860ef73aef3\") " Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.360764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "115e04e0-b027-42bb-bdbf-f860ef73aef3" (UID: "115e04e0-b027-42bb-bdbf-f860ef73aef3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.360807 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "963497bf-0dd8-4d5c-a046-360dbfdaf2a6" (UID: "963497bf-0dd8-4d5c-a046-360dbfdaf2a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.366214 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg" (OuterVolumeSpecName: "kube-api-access-whfqg") pod "115e04e0-b027-42bb-bdbf-f860ef73aef3" (UID: "115e04e0-b027-42bb-bdbf-f860ef73aef3"). InnerVolumeSpecName "kube-api-access-whfqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.366588 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr" (OuterVolumeSpecName: "kube-api-access-f88sr") pod "963497bf-0dd8-4d5c-a046-360dbfdaf2a6" (UID: "963497bf-0dd8-4d5c-a046-360dbfdaf2a6"). InnerVolumeSpecName "kube-api-access-f88sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.466324 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.466373 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f88sr\" (UniqueName: \"kubernetes.io/projected/963497bf-0dd8-4d5c-a046-360dbfdaf2a6-kube-api-access-f88sr\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.466386 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115e04e0-b027-42bb-bdbf-f860ef73aef3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:22 crc kubenswrapper[4834]: I0121 14:51:22.466396 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whfqg\" (UniqueName: \"kubernetes.io/projected/115e04e0-b027-42bb-bdbf-f860ef73aef3-kube-api-access-whfqg\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:23 crc kubenswrapper[4834]: I0121 14:51:23.144208 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fsjzd" Jan 21 14:51:23 crc kubenswrapper[4834]: I0121 14:51:23.144228 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x6zbx" Jan 21 14:51:24 crc kubenswrapper[4834]: I0121 14:51:24.818888 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:24 crc kubenswrapper[4834]: I0121 14:51:24.845973 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:24 crc kubenswrapper[4834]: I0121 14:51:24.883227 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:24 crc kubenswrapper[4834]: I0121 14:51:24.894784 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:24 crc kubenswrapper[4834]: I0121 14:51:24.904674 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.014895 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015017 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2wf9\" (UniqueName: \"kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015075 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts\") pod \"dc890a7e-8c34-46c9-ae49-3e5117149f34\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015097 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015119 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015151 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015184 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts\") pod \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015217 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts\") pod \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015255 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wjfb\" (UniqueName: \"kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb\") pod \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\" (UID: \"1491d4f8-0e00-406a-8e91-51d3dc0e5a68\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015286 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts\") 
pod \"7775589a-98d8-4291-9c93-26bb67d1c99f\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015316 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbw6t\" (UniqueName: \"kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t\") pod \"dc890a7e-8c34-46c9-ae49-3e5117149f34\" (UID: \"dc890a7e-8c34-46c9-ae49-3e5117149f34\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015367 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgcss\" (UniqueName: \"kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss\") pod \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\" (UID: \"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015406 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run\") pod \"66a84771-5b3a-4320-8f4c-d02c11d66c66\" (UID: \"66a84771-5b3a-4320-8f4c-d02c11d66c66\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.015430 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5p8l\" (UniqueName: \"kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l\") pod \"7775589a-98d8-4291-9c93-26bb67d1c99f\" (UID: \"7775589a-98d8-4291-9c93-26bb67d1c99f\") " Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.016335 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.016516 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.016550 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run" (OuterVolumeSpecName: "var-run") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.017043 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7775589a-98d8-4291-9c93-26bb67d1c99f" (UID: "7775589a-98d8-4291-9c93-26bb67d1c99f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.017064 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1491d4f8-0e00-406a-8e91-51d3dc0e5a68" (UID: "1491d4f8-0e00-406a-8e91-51d3dc0e5a68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.017261 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc890a7e-8c34-46c9-ae49-3e5117149f34" (UID: "dc890a7e-8c34-46c9-ae49-3e5117149f34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.017624 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" (UID: "ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.018599 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.019750 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts" (OuterVolumeSpecName: "scripts") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.020325 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l" (OuterVolumeSpecName: "kube-api-access-q5p8l") pod "7775589a-98d8-4291-9c93-26bb67d1c99f" (UID: "7775589a-98d8-4291-9c93-26bb67d1c99f"). InnerVolumeSpecName "kube-api-access-q5p8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.020396 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t" (OuterVolumeSpecName: "kube-api-access-qbw6t") pod "dc890a7e-8c34-46c9-ae49-3e5117149f34" (UID: "dc890a7e-8c34-46c9-ae49-3e5117149f34"). InnerVolumeSpecName "kube-api-access-qbw6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.021292 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb" (OuterVolumeSpecName: "kube-api-access-2wjfb") pod "1491d4f8-0e00-406a-8e91-51d3dc0e5a68" (UID: "1491d4f8-0e00-406a-8e91-51d3dc0e5a68"). InnerVolumeSpecName "kube-api-access-2wjfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.021379 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss" (OuterVolumeSpecName: "kube-api-access-wgcss") pod "ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" (UID: "ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a"). InnerVolumeSpecName "kube-api-access-wgcss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.021883 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9" (OuterVolumeSpecName: "kube-api-access-x2wf9") pod "66a84771-5b3a-4320-8f4c-d02c11d66c66" (UID: "66a84771-5b3a-4320-8f4c-d02c11d66c66"). InnerVolumeSpecName "kube-api-access-x2wf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120392 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wjfb\" (UniqueName: \"kubernetes.io/projected/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-kube-api-access-2wjfb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120441 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7775589a-98d8-4291-9c93-26bb67d1c99f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120453 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbw6t\" (UniqueName: \"kubernetes.io/projected/dc890a7e-8c34-46c9-ae49-3e5117149f34-kube-api-access-qbw6t\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120465 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgcss\" (UniqueName: \"kubernetes.io/projected/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-kube-api-access-wgcss\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120475 4834 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120484 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5p8l\" (UniqueName: \"kubernetes.io/projected/7775589a-98d8-4291-9c93-26bb67d1c99f-kube-api-access-q5p8l\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120493 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120502 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2wf9\" (UniqueName: 
\"kubernetes.io/projected/66a84771-5b3a-4320-8f4c-d02c11d66c66-kube-api-access-x2wf9\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120513 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc890a7e-8c34-46c9-ae49-3e5117149f34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120521 4834 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66a84771-5b3a-4320-8f4c-d02c11d66c66-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120529 4834 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120540 4834 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66a84771-5b3a-4320-8f4c-d02c11d66c66-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120548 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.120557 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1491d4f8-0e00-406a-8e91-51d3dc0e5a68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.172677 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbc8d" event={"ID":"367e30a8-4fb2-47e5-a2f4-5e481d37fcca","Type":"ContainerStarted","Data":"ff5f1996000fd126aeb8d7e3152bd522162d4bf437847b5af7b3bc4102ac459e"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.178102 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-l986q" event={"ID":"ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a","Type":"ContainerDied","Data":"e29dd69ca94f32b9e498ba60eba9fb52b37e7eed3ae53f1c33c595b82fdca721"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.178182 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e29dd69ca94f32b9e498ba60eba9fb52b37e7eed3ae53f1c33c595b82fdca721" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.178188 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-l986q" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.181816 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d6d-account-create-update-gp9sw" event={"ID":"7775589a-98d8-4291-9c93-26bb67d1c99f","Type":"ContainerDied","Data":"9b0b00106e5308d3f29d90343cb2912f205f48c492d216f1780c6898d3c26999"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.181856 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0b00106e5308d3f29d90343cb2912f205f48c492d216f1780c6898d3c26999" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.181919 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8d6d-account-create-update-gp9sw" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.186049 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.190487 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4whjk" event={"ID":"1491d4f8-0e00-406a-8e91-51d3dc0e5a68","Type":"ContainerDied","Data":"457efdd88dcb4311e50ad71734c2adb2c8e487edacf06ff43fc049ce24c61271"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.190533 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457efdd88dcb4311e50ad71734c2adb2c8e487edacf06ff43fc049ce24c61271" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.190683 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4whjk" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.196830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs-config-59lft" event={"ID":"66a84771-5b3a-4320-8f4c-d02c11d66c66","Type":"ContainerDied","Data":"3e0d5595dbc3240aede9e5721d797ad6df2d560c35a574a7ac397f14494d7c61"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.196884 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0d5595dbc3240aede9e5721d797ad6df2d560c35a574a7ac397f14494d7c61" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.196989 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs-config-59lft" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.200621 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-p7tv4" event={"ID":"dc890a7e-8c34-46c9-ae49-3e5117149f34","Type":"ContainerDied","Data":"6fb753d4e716259c6a322c188c00a0e6ab192edcf0922d084d30eeed675673e9"} Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.200683 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb753d4e716259c6a322c188c00a0e6ab192edcf0922d084d30eeed675673e9" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.200687 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-p7tv4" Jan 21 14:51:25 crc kubenswrapper[4834]: I0121 14:51:25.210721 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-zbc8d" podStartSLOduration=8.970382831 podStartE2EDuration="19.210697003s" podCreationTimestamp="2026-01-21 14:51:06 +0000 UTC" firstStartedPulling="2026-01-21 14:51:14.403233388 +0000 UTC m=+1220.377582433" lastFinishedPulling="2026-01-21 14:51:24.64354756 +0000 UTC m=+1230.617896605" observedRunningTime="2026-01-21 14:51:25.195189602 +0000 UTC m=+1231.169538647" watchObservedRunningTime="2026-01-21 14:51:25.210697003 +0000 UTC m=+1231.185046048" Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.037000 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9wtcs-config-59lft"] Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.055644 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9wtcs-config-59lft"] Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.283294 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785"} Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.283334 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453"} Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.283345 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348"} Jan 21 14:51:26 crc kubenswrapper[4834]: I0121 14:51:26.337073 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a84771-5b3a-4320-8f4c-d02c11d66c66" path="/var/lib/kubelet/pods/66a84771-5b3a-4320-8f4c-d02c11d66c66/volumes" Jan 21 14:51:27 crc kubenswrapper[4834]: I0121 14:51:27.293071 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rkw29" event={"ID":"b5abd5d5-addd-4b84-a301-86a55a7e23cf","Type":"ContainerStarted","Data":"7f37d746c1e23773ff5721e4e997cd19227ff7f8c3be0289160cf0c721ed6064"} Jan 21 14:51:27 crc kubenswrapper[4834]: I0121 14:51:27.314701 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rkw29" podStartSLOduration=3.585839041 podStartE2EDuration="38.314679111s" podCreationTimestamp="2026-01-21 14:50:49 +0000 UTC" firstStartedPulling="2026-01-21 14:50:51.035045504 +0000 UTC m=+1197.009394549" lastFinishedPulling="2026-01-21 14:51:25.763885564 +0000 UTC m=+1231.738234619" observedRunningTime="2026-01-21 14:51:27.308738426 +0000 UTC m=+1233.283087471" watchObservedRunningTime="2026-01-21 14:51:27.314679111 +0000 UTC m=+1233.289028156" Jan 21 14:51:28 crc kubenswrapper[4834]: I0121 14:51:28.305556 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01"} Jan 21 14:51:28 crc kubenswrapper[4834]: I0121 14:51:28.305614 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc"} Jan 21 14:51:28 crc kubenswrapper[4834]: I0121 14:51:28.305623 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693"} Jan 21 14:51:29 crc kubenswrapper[4834]: I0121 14:51:29.335867 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047"} Jan 21 14:51:29 crc kubenswrapper[4834]: I0121 14:51:29.336261 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27"} Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.347530 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5"} Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.347951 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerStarted","Data":"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f"} Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.679880 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.484639857 podStartE2EDuration="1m8.67985442s" podCreationTimestamp="2026-01-21 14:50:22 +0000 UTC" firstStartedPulling="2026-01-21 14:50:55.939345398 +0000 UTC m=+1201.913694433" lastFinishedPulling="2026-01-21 14:51:27.134559951 +0000 UTC m=+1233.108908996" observedRunningTime="2026-01-21 14:51:30.3813797 +0000 UTC m=+1236.355728755" watchObservedRunningTime="2026-01-21 14:51:30.67985442 +0000 UTC m=+1236.654203465" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.685847 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.686566 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a84771-5b3a-4320-8f4c-d02c11d66c66" containerName="ovn-config" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.686658 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a84771-5b3a-4320-8f4c-d02c11d66c66" containerName="ovn-config" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.686729 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7775589a-98d8-4291-9c93-26bb67d1c99f" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.686796 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7775589a-98d8-4291-9c93-26bb67d1c99f" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.686869 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1491d4f8-0e00-406a-8e91-51d3dc0e5a68" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.686958 
4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1491d4f8-0e00-406a-8e91-51d3dc0e5a68" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.687032 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc890a7e-8c34-46c9-ae49-3e5117149f34" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687097 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc890a7e-8c34-46c9-ae49-3e5117149f34" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.687169 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687232 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.687295 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c461d8-2b64-4737-b0df-da4bddde822d" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687389 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c461d8-2b64-4737-b0df-da4bddde822d" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.687450 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963497bf-0dd8-4d5c-a046-360dbfdaf2a6" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687510 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="963497bf-0dd8-4d5c-a046-360dbfdaf2a6" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: E0121 14:51:30.687573 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115e04e0-b027-42bb-bdbf-f860ef73aef3" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687630 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="115e04e0-b027-42bb-bdbf-f860ef73aef3" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687843 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc890a7e-8c34-46c9-ae49-3e5117149f34" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.687942 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c461d8-2b64-4737-b0df-da4bddde822d" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.688009 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1491d4f8-0e00-406a-8e91-51d3dc0e5a68" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.688077 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a84771-5b3a-4320-8f4c-d02c11d66c66" containerName="ovn-config" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.688144 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.688210 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7775589a-98d8-4291-9c93-26bb67d1c99f" containerName="mariadb-account-create-update" Jan 21 14:51:30 crc 
kubenswrapper[4834]: I0121 14:51:30.688271 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="115e04e0-b027-42bb-bdbf-f860ef73aef3" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.688352 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="963497bf-0dd8-4d5c-a046-360dbfdaf2a6" containerName="mariadb-database-create" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.689350 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.692154 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.709942 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.821695 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.821775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.821998 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcb45\" (UniqueName: \"kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.822069 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.822127 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.822155 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.923849 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.924219 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.924364 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcb45\" (UniqueName: \"kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.924451 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.924608 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.924714 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.925225 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.925490 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.925503 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.925637 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.925693 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:30 crc kubenswrapper[4834]: I0121 14:51:30.968900 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcb45\" (UniqueName: \"kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45\") pod \"dnsmasq-dns-8db84466c-hbnc5\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:31 crc kubenswrapper[4834]: I0121 14:51:31.013353 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:31 crc kubenswrapper[4834]: I0121 14:51:31.357784 4834 generic.go:334] "Generic (PLEG): container finished" podID="367e30a8-4fb2-47e5-a2f4-5e481d37fcca" containerID="ff5f1996000fd126aeb8d7e3152bd522162d4bf437847b5af7b3bc4102ac459e" exitCode=0 Jan 21 14:51:31 crc kubenswrapper[4834]: I0121 14:51:31.359401 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbc8d" event={"ID":"367e30a8-4fb2-47e5-a2f4-5e481d37fcca","Type":"ContainerDied","Data":"ff5f1996000fd126aeb8d7e3152bd522162d4bf437847b5af7b3bc4102ac459e"} Jan 21 14:51:31 crc kubenswrapper[4834]: I0121 14:51:31.521364 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.369240 4834 generic.go:334] "Generic (PLEG): container finished" podID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerID="cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0" exitCode=0 Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.369336 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" event={"ID":"f3f18a1f-904a-4d7e-a843-26aaa8c562c8","Type":"ContainerDied","Data":"cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0"} Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.369661 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" event={"ID":"f3f18a1f-904a-4d7e-a843-26aaa8c562c8","Type":"ContainerStarted","Data":"2271df336aceb488a1dd7601dfd74b72d030da2325081c3c76db3d8749556b24"} Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.664476 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.777869 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data\") pod \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.778425 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle\") pod \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.778540 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkz5t\" (UniqueName: \"kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t\") pod \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\" (UID: \"367e30a8-4fb2-47e5-a2f4-5e481d37fcca\") " Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.790422 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t" (OuterVolumeSpecName: "kube-api-access-pkz5t") pod "367e30a8-4fb2-47e5-a2f4-5e481d37fcca" (UID: "367e30a8-4fb2-47e5-a2f4-5e481d37fcca"). InnerVolumeSpecName "kube-api-access-pkz5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.811692 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "367e30a8-4fb2-47e5-a2f4-5e481d37fcca" (UID: "367e30a8-4fb2-47e5-a2f4-5e481d37fcca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.847562 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data" (OuterVolumeSpecName: "config-data") pod "367e30a8-4fb2-47e5-a2f4-5e481d37fcca" (UID: "367e30a8-4fb2-47e5-a2f4-5e481d37fcca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.880190 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkz5t\" (UniqueName: \"kubernetes.io/projected/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-kube-api-access-pkz5t\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.880232 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:32 crc kubenswrapper[4834]: I0121 14:51:32.880242 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367e30a8-4fb2-47e5-a2f4-5e481d37fcca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.386860 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-zbc8d" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.386906 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-zbc8d" event={"ID":"367e30a8-4fb2-47e5-a2f4-5e481d37fcca","Type":"ContainerDied","Data":"93e7986416a4a320e19c41dc6cc16fc5bc63bd46597ea5700bef56e7ef47d14b"} Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.387481 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93e7986416a4a320e19c41dc6cc16fc5bc63bd46597ea5700bef56e7ef47d14b" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.389765 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" event={"ID":"f3f18a1f-904a-4d7e-a843-26aaa8c562c8","Type":"ContainerStarted","Data":"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1"} Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.389997 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.430413 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" podStartSLOduration=3.430394569 podStartE2EDuration="3.430394569s" podCreationTimestamp="2026-01-21 14:51:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:33.420218473 +0000 UTC m=+1239.394567518" watchObservedRunningTime="2026-01-21 14:51:33.430394569 +0000 UTC m=+1239.404743614" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.677332 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.688367 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mq995"] Jan 21 14:51:33 crc kubenswrapper[4834]: E0121 14:51:33.688861 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367e30a8-4fb2-47e5-a2f4-5e481d37fcca" containerName="keystone-db-sync" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.688881 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="367e30a8-4fb2-47e5-a2f4-5e481d37fcca" containerName="keystone-db-sync" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.689119 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="367e30a8-4fb2-47e5-a2f4-5e481d37fcca" containerName="keystone-db-sync" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.689815 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.697234 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.697438 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.697485 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.697706 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.699356 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qqrx" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.708327 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mq995"] Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.771501 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.773484 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.794954 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.795004 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.795048 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.795086 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjl25\" (UniqueName: \"kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.795108 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.795153 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.824544 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898017 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898077 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898128 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898161 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898193 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898222 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdh2\" (UniqueName: \"kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898255 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898293 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjl25\" (UniqueName: \"kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc 
kubenswrapper[4834]: I0121 14:51:33.898312 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898334 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898377 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.898404 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.909977 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.910443 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.911033 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.923399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.938444 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.955561 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjl25\" 
(UniqueName: \"kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25\") pod \"keystone-bootstrap-mq995\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.973366 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.985403 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.997853 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999669 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999734 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999794 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999819 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdh2\" (UniqueName: \"kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999885 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:33 crc kubenswrapper[4834]: I0121 14:51:33.999958 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.000818 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.001239 4834 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-db-sync-bmr8w"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.003299 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.005184 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.006788 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.008770 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.008855 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.009375 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.013240 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.019391 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-86c57" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.020511 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.020826 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bmr8w"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.020950 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103502 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103621 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9lx\" (UniqueName: \"kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103666 
4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103697 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2gt\" (UniqueName: \"kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103722 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103752 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103780 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.103807 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.104048 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.104093 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.113036 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.142581 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdh2\" (UniqueName: \"kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2\") pod \"dnsmasq-dns-767d96458c-6d9zc\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 
14:51:34.207751 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207810 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207866 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207902 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z9lx\" (UniqueName: \"kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207952 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207974 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2gt\" (UniqueName: \"kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.207995 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.208022 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.208038 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.208061 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" 
Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.209228 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.209517 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.211795 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-722hs"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.213891 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.224346 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cxmjg" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.225039 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.229477 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.235663 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.235663 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.240435 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-722hs"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.246679 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.246950 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.249428 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.273512 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.286852 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2gt\" (UniqueName: \"kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt\") pod \"ceilometer-0\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.287914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z9lx\" (UniqueName: \"kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx\") pod \"neutron-db-sync-bmr8w\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.297592 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x8ctw"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.298881 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.304755 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.313704 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.313829 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.313869 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtnh\" (UniqueName: \"kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.313947 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.313988 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.316797 4834 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.317147 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.325508 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8ctw"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.327201 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qhxjf" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.393503 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.393555 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.398334 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.401371 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.417610 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.419318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.420509 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.428435 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtnh\" (UniqueName: \"kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.429107 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.429151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.429378 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data\") pod 
\"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.429622 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.436794 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.437145 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.438052 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.438593 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.438561 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.439151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v6k\" (UniqueName: \"kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.439299 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.465301 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5nwpj"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.467600 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.480334 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.488048 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5nwpj"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.493761 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.496795 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wjsjv" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.505045 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.518643 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtnh\" (UniqueName: \"kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh\") pod \"placement-db-sync-722hs\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542361 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542798 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542841 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt564\" (UniqueName: \"kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542862 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542886 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnkt\" (UniqueName: \"kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.542912 
4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543028 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543060 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543087 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543112 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543159 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v6k\" (UniqueName: \"kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543193 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543217 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543254 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.543302 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.553725 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.567037 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.590469 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.593337 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.600720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.617689 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v6k\" (UniqueName: \"kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k\") pod \"cinder-db-sync-x8ctw\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.644996 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645069 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645123 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data\") pod \"barbican-db-sync-5nwpj\" (UID: 
\"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645146 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt564\" (UniqueName: \"kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645178 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645209 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnkt\" (UniqueName: \"kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645862 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645940 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.645968 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.646794 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.646938 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.647081 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc 
kubenswrapper[4834]: I0121 14:51:34.647735 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.648432 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.653458 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.654781 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.655528 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-722hs" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.682366 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.689305 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnkt\" (UniqueName: \"kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt\") pod \"barbican-db-sync-5nwpj\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.690233 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt564\" (UniqueName: \"kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564\") pod \"dnsmasq-dns-7fc6d4ffc7-dmgsj\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.706622 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mq995"] Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.779828 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:34 crc kubenswrapper[4834]: I0121 14:51:34.824604 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:51:35 crc kubenswrapper[4834]: W0121 14:51:35.233162 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8becb166_563c_43ec_8d07_567f51c39d64.slice/crio-4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0 WatchSource:0}: Error finding container 4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0: Status 404 returned error can't find the container with id 4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0 Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.253718 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bmr8w"] Jan 21 14:51:35 crc kubenswrapper[4834]: W0121 14:51:35.262548 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6798ce4_621c_42c8_a077_eedf4d68ebce.slice/crio-6ca0379055211ee5861b982fc2633763e6143d873c4d1c7ed3d5014286ff8579 WatchSource:0}: Error finding container 6ca0379055211ee5861b982fc2633763e6143d873c4d1c7ed3d5014286ff8579: Status 404 returned error can't find the container with id 6ca0379055211ee5861b982fc2633763e6143d873c4d1c7ed3d5014286ff8579 Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.265847 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.393250 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.466458 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-722hs"] Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.492460 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmr8w" event={"ID":"8becb166-563c-43ec-8d07-567f51c39d64","Type":"ContainerStarted","Data":"4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0"} Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.497512 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x8ctw"] Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.500113 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" event={"ID":"c6798ce4-621c-42c8-a077-eedf4d68ebce","Type":"ContainerStarted","Data":"6ca0379055211ee5861b982fc2633763e6143d873c4d1c7ed3d5014286ff8579"} Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.508372 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mq995" event={"ID":"582d71b6-2159-468f-90d8-81437c828959","Type":"ContainerStarted","Data":"794676590f991a7ecdd6da4e45f2a58ee67264e8e686dd305a3cf7b02ffbff05"} Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.508416 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mq995" event={"ID":"582d71b6-2159-468f-90d8-81437c828959","Type":"ContainerStarted","Data":"bf6840290cf948770121585693df34326ebb2e0d08f166111f0ce5afcf947569"} Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.510191 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerStarted","Data":"df7c8df28f9144f756ea5fe91582865be91b26c702281417de012c2215d4572e"} Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.510327 4834 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="dnsmasq-dns" containerID="cri-o://8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1" gracePeriod=10 Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.548463 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mq995" podStartSLOduration=2.548435844 podStartE2EDuration="2.548435844s" podCreationTimestamp="2026-01-21 14:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:35.526530553 +0000 UTC m=+1241.500879598" watchObservedRunningTime="2026-01-21 14:51:35.548435844 +0000 UTC m=+1241.522784889" Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.657164 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.673876 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5nwpj"] Jan 21 14:51:35 crc kubenswrapper[4834]: W0121 14:51:35.679061 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e3b5fdd_2a9d_4750_8013_081c1b410875.slice/crio-4d00fc20aad50c429ac53b788234505bd26517b08a697dab99a462178873d87b WatchSource:0}: Error finding container 4d00fc20aad50c429ac53b788234505bd26517b08a697dab99a462178873d87b: Status 404 returned error can't find the container with id 4d00fc20aad50c429ac53b788234505bd26517b08a697dab99a462178873d87b Jan 21 14:51:35 crc kubenswrapper[4834]: I0121 14:51:35.942896 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.085997 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.086135 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcb45\" (UniqueName: \"kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.086175 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.086210 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.086392 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.086439 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config\") pod \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\" (UID: \"f3f18a1f-904a-4d7e-a843-26aaa8c562c8\") " Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.114951 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45" (OuterVolumeSpecName: "kube-api-access-pcb45") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "kube-api-access-pcb45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.156153 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.158347 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config" (OuterVolumeSpecName: "config") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.188362 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.188391 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcb45\" (UniqueName: \"kubernetes.io/projected/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-kube-api-access-pcb45\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.188402 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.195061 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.200878 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.212443 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3f18a1f-904a-4d7e-a843-26aaa8c562c8" (UID: "f3f18a1f-904a-4d7e-a843-26aaa8c562c8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.290235 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.291981 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.292031 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3f18a1f-904a-4d7e-a843-26aaa8c562c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.522865 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5nwpj" event={"ID":"2907abf2-1f9d-497d-bfb3-bf4094e7c174","Type":"ContainerStarted","Data":"478feb9bed9d69e2ff284a1486bed4de2e91b35aa61e5b1ce974ac7d2cb30bd4"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.525553 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8ctw" event={"ID":"13827989-07c5-4417-9be2-574fbca9ddbb","Type":"ContainerStarted","Data":"69712b0520b8346fab377f4f1fedc16b5f1531a7ac0990161eb74f41e736e2be"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.529623 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmr8w" event={"ID":"8becb166-563c-43ec-8d07-567f51c39d64","Type":"ContainerStarted","Data":"9e0eb4b587e1c4a06b4592384cbe89012e12116150f42a3c1da82d69989a8bf5"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.547299 4834 generic.go:334] "Generic (PLEG): container finished" podID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerID="68aacc6fdae5085b9f828a4853e9a747b1a92443ce7e365653aa84acf9fef628" exitCode=0 Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.547466 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" event={"ID":"9e3b5fdd-2a9d-4750-8013-081c1b410875","Type":"ContainerDied","Data":"68aacc6fdae5085b9f828a4853e9a747b1a92443ce7e365653aa84acf9fef628"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.547503 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" event={"ID":"9e3b5fdd-2a9d-4750-8013-081c1b410875","Type":"ContainerStarted","Data":"4d00fc20aad50c429ac53b788234505bd26517b08a697dab99a462178873d87b"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.554818 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bmr8w" podStartSLOduration=3.554781233 podStartE2EDuration="3.554781233s" podCreationTimestamp="2026-01-21 14:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:36.54727907 +0000 UTC m=+1242.521628115" watchObservedRunningTime="2026-01-21 14:51:36.554781233 +0000 UTC m=+1242.529130278" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.556381 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-722hs" event={"ID":"3ed348d9-9d38-4546-a839-0930def4c9f3","Type":"ContainerStarted","Data":"d539b91e41c282b87553734e3236b8dbb75918e52e4606ce8329299001cea2b7"} Jan 21 14:51:36 crc 
kubenswrapper[4834]: I0121 14:51:36.613673 4834 generic.go:334] "Generic (PLEG): container finished" podID="c6798ce4-621c-42c8-a077-eedf4d68ebce" containerID="39a7ff8e19818015c742906e4789b0084ccb954361a0c04f8d9e337250689012" exitCode=0 Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.613766 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" event={"ID":"c6798ce4-621c-42c8-a077-eedf4d68ebce","Type":"ContainerDied","Data":"39a7ff8e19818015c742906e4789b0084ccb954361a0c04f8d9e337250689012"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.624789 4834 generic.go:334] "Generic (PLEG): container finished" podID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerID="8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1" exitCode=0 Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.625042 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.625665 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" event={"ID":"f3f18a1f-904a-4d7e-a843-26aaa8c562c8","Type":"ContainerDied","Data":"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.625711 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-hbnc5" event={"ID":"f3f18a1f-904a-4d7e-a843-26aaa8c562c8","Type":"ContainerDied","Data":"2271df336aceb488a1dd7601dfd74b72d030da2325081c3c76db3d8749556b24"} Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.625731 4834 scope.go:117] "RemoveContainer" containerID="8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.678006 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.684334 4834 scope.go:117] "RemoveContainer" containerID="cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.685459 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-hbnc5"] Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.742248 4834 scope.go:117] "RemoveContainer" containerID="8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1" Jan 21 14:51:36 crc kubenswrapper[4834]: E0121 14:51:36.742920 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1\": container with ID starting with 8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1 not found: ID does not exist" containerID="8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.742973 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1"} err="failed to get container status \"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1\": rpc error: code = NotFound desc = could not find container \"8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1\": container with ID starting with 8cd44e6c44f5ec6cc7d49e5df8e5ead9aba4a4a15e34ab7f769761b9439ed5c1 not found: ID does not exist" Jan 21 14:51:36 
crc kubenswrapper[4834]: I0121 14:51:36.742997 4834 scope.go:117] "RemoveContainer" containerID="cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0" Jan 21 14:51:36 crc kubenswrapper[4834]: E0121 14:51:36.744196 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0\": container with ID starting with cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0 not found: ID does not exist" containerID="cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0" Jan 21 14:51:36 crc kubenswrapper[4834]: I0121 14:51:36.744237 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0"} err="failed to get container status \"cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0\": rpc error: code = NotFound desc = could not find container \"cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0\": container with ID starting with cb2007361cf761a02613c91d5acd34cba1f735c8ee147a58439ec25179c7f4b0 not found: ID does not exist" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.188575 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313425 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313558 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gdh2\" (UniqueName: \"kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313586 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313654 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313750 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.313793 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb\") pod \"c6798ce4-621c-42c8-a077-eedf4d68ebce\" (UID: \"c6798ce4-621c-42c8-a077-eedf4d68ebce\") " Jan 21 14:51:37 crc 
kubenswrapper[4834]: I0121 14:51:37.347198 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2" (OuterVolumeSpecName: "kube-api-access-8gdh2") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "kube-api-access-8gdh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.364916 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.377815 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.385460 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.391154 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.398058 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config" (OuterVolumeSpecName: "config") pod "c6798ce4-621c-42c8-a077-eedf4d68ebce" (UID: "c6798ce4-621c-42c8-a077-eedf4d68ebce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416093 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416140 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416153 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416164 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gdh2\" (UniqueName: \"kubernetes.io/projected/c6798ce4-621c-42c8-a077-eedf4d68ebce-kube-api-access-8gdh2\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416174 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.416182 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6798ce4-621c-42c8-a077-eedf4d68ebce-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.571411 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.652335 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" event={"ID":"c6798ce4-621c-42c8-a077-eedf4d68ebce","Type":"ContainerDied","Data":"6ca0379055211ee5861b982fc2633763e6143d873c4d1c7ed3d5014286ff8579"} Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.652388 4834 scope.go:117] "RemoveContainer" containerID="39a7ff8e19818015c742906e4789b0084ccb954361a0c04f8d9e337250689012" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.652493 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-6d9zc" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.685247 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" event={"ID":"9e3b5fdd-2a9d-4750-8013-081c1b410875","Type":"ContainerStarted","Data":"65f25c3a7c8cf1c54bef769f4b8f659d8cadc67196eeac39e6de922473fadc20"} Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.685895 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.752959 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.773206 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-6d9zc"] Jan 21 14:51:37 crc kubenswrapper[4834]: I0121 14:51:37.786259 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" podStartSLOduration=3.786229971 podStartE2EDuration="3.786229971s" podCreationTimestamp="2026-01-21 14:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:51:37.763179255 +0000 UTC m=+1243.737528300" watchObservedRunningTime="2026-01-21 14:51:37.786229971 +0000 UTC m=+1243.760579016" Jan 21 14:51:38 crc kubenswrapper[4834]: I0121 14:51:38.341030 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6798ce4-621c-42c8-a077-eedf4d68ebce" path="/var/lib/kubelet/pods/c6798ce4-621c-42c8-a077-eedf4d68ebce/volumes" Jan 21 14:51:38 crc kubenswrapper[4834]: I0121 14:51:38.341579 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" path="/var/lib/kubelet/pods/f3f18a1f-904a-4d7e-a843-26aaa8c562c8/volumes" Jan 21 14:51:40 crc kubenswrapper[4834]: I0121 14:51:40.739740 4834 generic.go:334] "Generic (PLEG): container finished" podID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" containerID="7f37d746c1e23773ff5721e4e997cd19227ff7f8c3be0289160cf0c721ed6064" exitCode=0 Jan 21 14:51:40 crc kubenswrapper[4834]: I0121 14:51:40.739838 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rkw29" event={"ID":"b5abd5d5-addd-4b84-a301-86a55a7e23cf","Type":"ContainerDied","Data":"7f37d746c1e23773ff5721e4e997cd19227ff7f8c3be0289160cf0c721ed6064"} Jan 21 14:51:40 crc kubenswrapper[4834]: I0121 14:51:40.744350 4834 generic.go:334] "Generic (PLEG): container finished" podID="582d71b6-2159-468f-90d8-81437c828959" containerID="794676590f991a7ecdd6da4e45f2a58ee67264e8e686dd305a3cf7b02ffbff05" exitCode=0 Jan 21 14:51:40 crc kubenswrapper[4834]: I0121 14:51:40.744375 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mq995" event={"ID":"582d71b6-2159-468f-90d8-81437c828959","Type":"ContainerDied","Data":"794676590f991a7ecdd6da4e45f2a58ee67264e8e686dd305a3cf7b02ffbff05"} Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.770170 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rkw29" event={"ID":"b5abd5d5-addd-4b84-a301-86a55a7e23cf","Type":"ContainerDied","Data":"3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5"} Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.770518 4834 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3350c7bf44345179ff391c5ffc1ba6608bf609a04e4af73a3f8f2d61468c72f5" Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.772525 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mq995" event={"ID":"582d71b6-2159-468f-90d8-81437c828959","Type":"ContainerDied","Data":"bf6840290cf948770121585693df34326ebb2e0d08f166111f0ce5afcf947569"} Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.772552 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6840290cf948770121585693df34326ebb2e0d08f166111f0ce5afcf947569" Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.819691 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rkw29" Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.829531 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986649 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986792 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986827 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data\") pod \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986854 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62r85\" (UniqueName: \"kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85\") pod \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986878 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data\") pod \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986921 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.986982 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.987003 4834 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wjl25\" (UniqueName: \"kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.987056 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle\") pod \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\" (UID: \"b5abd5d5-addd-4b84-a301-86a55a7e23cf\") " Jan 21 14:51:43 crc kubenswrapper[4834]: I0121 14:51:43.987138 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys\") pod \"582d71b6-2159-468f-90d8-81437c828959\" (UID: \"582d71b6-2159-468f-90d8-81437c828959\") " Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:43.992976 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts" (OuterVolumeSpecName: "scripts") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.011104 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.036866 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.053426 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5abd5d5-addd-4b84-a301-86a55a7e23cf" (UID: "b5abd5d5-addd-4b84-a301-86a55a7e23cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.067169 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25" (OuterVolumeSpecName: "kube-api-access-wjl25") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "kube-api-access-wjl25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.073392 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b5abd5d5-addd-4b84-a301-86a55a7e23cf" (UID: "b5abd5d5-addd-4b84-a301-86a55a7e23cf"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.077462 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data" (OuterVolumeSpecName: "config-data") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090819 4834 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090876 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090889 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090901 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090914 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090968 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjl25\" (UniqueName: \"kubernetes.io/projected/582d71b6-2159-468f-90d8-81437c828959-kube-api-access-wjl25\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.090981 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.127830 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85" (OuterVolumeSpecName: "kube-api-access-62r85") pod "b5abd5d5-addd-4b84-a301-86a55a7e23cf" (UID: "b5abd5d5-addd-4b84-a301-86a55a7e23cf"). InnerVolumeSpecName "kube-api-access-62r85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.188052 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data" (OuterVolumeSpecName: "config-data") pod "b5abd5d5-addd-4b84-a301-86a55a7e23cf" (UID: "b5abd5d5-addd-4b84-a301-86a55a7e23cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.192407 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62r85\" (UniqueName: \"kubernetes.io/projected/b5abd5d5-addd-4b84-a301-86a55a7e23cf-kube-api-access-62r85\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.192437 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5abd5d5-addd-4b84-a301-86a55a7e23cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.221078 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582d71b6-2159-468f-90d8-81437c828959" (UID: "582d71b6-2159-468f-90d8-81437c828959"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.294030 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582d71b6-2159-468f-90d8-81437c828959-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.781140 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.784680 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mq995" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.784700 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rkw29" Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.848076 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"] Jan 21 14:51:44 crc kubenswrapper[4834]: I0121 14:51:44.848444 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" containerID="cri-o://d547a8c911f1175d0404c262957df9c6eeae92858f611075b4c9e2c5d067e632" gracePeriod=10 Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.035216 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mq995"] Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.044010 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mq995"] Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.118757 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gvgk7"] Jan 21 14:51:45 crc kubenswrapper[4834]: E0121 14:51:45.127686 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="init" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127706 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="init" Jan 21 14:51:45 crc kubenswrapper[4834]: E0121 14:51:45.127737 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6798ce4-621c-42c8-a077-eedf4d68ebce" containerName="init" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127744 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6798ce4-621c-42c8-a077-eedf4d68ebce" containerName="init" Jan 21 14:51:45 crc kubenswrapper[4834]: E0121 14:51:45.127758 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="dnsmasq-dns" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127765 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="dnsmasq-dns" Jan 21 14:51:45 crc kubenswrapper[4834]: E0121 14:51:45.127772 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582d71b6-2159-468f-90d8-81437c828959" containerName="keystone-bootstrap" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127779 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="582d71b6-2159-468f-90d8-81437c828959" containerName="keystone-bootstrap" Jan 21 14:51:45 crc kubenswrapper[4834]: E0121 14:51:45.127790 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" containerName="glance-db-sync" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127797 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" containerName="glance-db-sync" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127965 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f18a1f-904a-4d7e-a843-26aaa8c562c8" containerName="dnsmasq-dns" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127977 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="582d71b6-2159-468f-90d8-81437c828959" containerName="keystone-bootstrap" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.127994 4834 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c6798ce4-621c-42c8-a077-eedf4d68ebce" containerName="init" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.128006 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" containerName="glance-db-sync" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.128577 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.141759 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gvgk7"] Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.142550 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.142894 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.143019 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qqrx" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.143153 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.143287 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233424 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233554 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233608 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfszc\" (UniqueName: \"kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233790 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233832 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.233895 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.360853 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfszc\" (UniqueName: \"kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.361047 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.361093 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.361155 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.361451 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.361564 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.381568 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.381974 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.399486 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: 
\"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.409776 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfszc\" (UniqueName: \"kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.410614 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.419288 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle\") pod \"keystone-bootstrap-gvgk7\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.478594 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.480775 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.491488 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.561762 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572060 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkwr\" (UniqueName: \"kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572198 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572320 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572462 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572550 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.572615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674072 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674198 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674252 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config\") pod 
\"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674302 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674336 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plkwr\" (UniqueName: \"kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.674417 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.678175 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.678186 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.679625 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.681595 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.682253 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.705641 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkwr\" (UniqueName: \"kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr\") pod \"dnsmasq-dns-6f6f8cb849-rg9z9\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " 
pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.815490 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.816343 4834 generic.go:334] "Generic (PLEG): container finished" podID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerID="d547a8c911f1175d0404c262957df9c6eeae92858f611075b4c9e2c5d067e632" exitCode=0 Jan 21 14:51:45 crc kubenswrapper[4834]: I0121 14:51:45.816378 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" event={"ID":"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0","Type":"ContainerDied","Data":"d547a8c911f1175d0404c262957df9c6eeae92858f611075b4c9e2c5d067e632"} Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.276225 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.277944 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.280365 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.280726 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z26b8" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.280956 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.303738 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.344373 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582d71b6-2159-468f-90d8-81437c828959" path="/var/lib/kubelet/pods/582d71b6-2159-468f-90d8-81437c828959/volumes" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.411434 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.411814 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.411852 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.411869 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.411887 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.412108 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhd5\" (UniqueName: \"kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.412278 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.514703 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.514888 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515011 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515068 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515106 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515134 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515238 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbhd5\" (UniqueName: \"kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515287 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.515524 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.516045 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.543804 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.544729 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.545157 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.561897 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbhd5\" (UniqueName: \"kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.595748 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " pod="openstack/glance-default-external-api-0" Jan 21 14:51:46 crc 
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.739540 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.741523 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.758624 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.760088 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861140 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861240 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861285 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w29vg\" (UniqueName: \"kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861365 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861391 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.861416 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963377 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963516 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w29vg\" (UniqueName: \"kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963564 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963595 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963628 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.963699 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.964229 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.964488 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.964500 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.969118 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.972323 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.982703 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:46 crc kubenswrapper[4834]: I0121 14:51:46.993519 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w29vg\" (UniqueName: \"kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:47 crc kubenswrapper[4834]: I0121 14:51:47.014346 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:51:47 crc kubenswrapper[4834]: I0121 14:51:47.093461 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:51:47 crc kubenswrapper[4834]: I0121 14:51:47.352581 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 21 14:51:48 crc kubenswrapper[4834]: I0121 14:51:48.228596 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:51:48 crc kubenswrapper[4834]: I0121 14:51:48.313893 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:51:52 crc kubenswrapper[4834]: I0121 14:51:52.352949 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 21 14:51:57 crc kubenswrapper[4834]: I0121 14:51:57.353502 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 21 14:51:57 crc kubenswrapper[4834]: I0121 14:51:57.354404 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.144704 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.232121 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.232351 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmnkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5nwpj_openstack(2907abf2-1f9d-497d-bfb3-bf4094e7c174): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.233622 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5nwpj" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.257539 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnr6f\" (UniqueName: \"kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f\") pod \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.257683 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc\") pod \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.257740 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config\") pod \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.257867 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb\") pod \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.257963 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb\") pod \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\" (UID: \"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0\") " Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.268279 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f" (OuterVolumeSpecName: "kube-api-access-cnr6f") pod "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" (UID: "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0"). InnerVolumeSpecName "kube-api-access-cnr6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.306529 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config" (OuterVolumeSpecName: "config") pod "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" (UID: "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.308951 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" (UID: "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.312214 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" (UID: "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.338656 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" (UID: "5dcd0de6-69da-45fb-8d4b-d4e94e087ec0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.359668 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.359711 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.359726 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnr6f\" (UniqueName: \"kubernetes.io/projected/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-kube-api-access-cnr6f\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.359741 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.359753 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.801020 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.801248 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n588h5c7hc4hfchd9h676h674h657h5ddh5ffh5fh54dh644h5c4h547h577h67ch8ch7dh57bhf4h549h6dh64fh548h68h54ch54dh577h5d8h579h57dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx2gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(39e7e8b6-f56f-4c87-9f77-9969923cbe27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.963620 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" event={"ID":"5dcd0de6-69da-45fb-8d4b-d4e94e087ec0","Type":"ContainerDied","Data":"9387ec890c9977b7f5099fc5fa3a9061fc4b1fff5a80fce28f0e67e5bd8b60c0"} Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.963678 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-7jmh9" Jan 21 14:52:00 crc kubenswrapper[4834]: I0121 14:52:00.963695 4834 scope.go:117] "RemoveContainer" containerID="d547a8c911f1175d0404c262957df9c6eeae92858f611075b4c9e2c5d067e632" Jan 21 14:52:00 crc kubenswrapper[4834]: E0121 14:52:00.965676 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-5nwpj" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" Jan 21 14:52:01 crc kubenswrapper[4834]: I0121 14:52:01.021611 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"] Jan 21 14:52:01 crc kubenswrapper[4834]: I0121 14:52:01.028941 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-7jmh9"] Jan 21 14:52:01 crc kubenswrapper[4834]: E0121 14:52:01.980745 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 21 14:52:01 crc kubenswrapper[4834]: E0121 14:52:01.980963 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6v6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-x8ctw_openstack(13827989-07c5-4417-9be2-574fbca9ddbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:52:01 crc kubenswrapper[4834]: E0121 14:52:01.982152 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-x8ctw" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" Jan 21 14:52:02 crc kubenswrapper[4834]: I0121 14:52:02.010126 4834 scope.go:117] "RemoveContainer" containerID="d7cdc666bae62349672c35da02cb6f15df3fee55134ef5d0109b0e0e46dba9f0" Jan 21 14:52:02 crc kubenswrapper[4834]: I0121 14:52:02.337888 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" path="/var/lib/kubelet/pods/5dcd0de6-69da-45fb-8d4b-d4e94e087ec0/volumes" Jan 21 14:52:02 crc kubenswrapper[4834]: I0121 14:52:02.741280 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:52:02 crc kubenswrapper[4834]: I0121 14:52:02.751269 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gvgk7"] Jan 21 14:52:02 crc kubenswrapper[4834]: I0121 
14:52:02.982682 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-722hs" event={"ID":"3ed348d9-9d38-4546-a839-0930def4c9f3","Type":"ContainerStarted","Data":"019916aa32a8d05882acc74c8e5940afedccdc288cba42e68ac13dd506dc1fa9"} Jan 21 14:52:02 crc kubenswrapper[4834]: E0121 14:52:02.985864 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-x8ctw" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" Jan 21 14:52:03 crc kubenswrapper[4834]: I0121 14:52:03.005340 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-722hs" podStartSLOduration=3.579441319 podStartE2EDuration="30.005315184s" podCreationTimestamp="2026-01-21 14:51:33 +0000 UTC" firstStartedPulling="2026-01-21 14:51:35.485497147 +0000 UTC m=+1241.459846192" lastFinishedPulling="2026-01-21 14:52:01.911371012 +0000 UTC m=+1267.885720057" observedRunningTime="2026-01-21 14:52:02.998676369 +0000 UTC m=+1268.973025414" watchObservedRunningTime="2026-01-21 14:52:03.005315184 +0000 UTC m=+1268.979664229" Jan 21 14:52:03 crc kubenswrapper[4834]: W0121 14:52:03.044868 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0281f2f1_e9cd_408f_93d3_d63d5e9fc9d5.slice/crio-cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a WatchSource:0}: Error finding container cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a: Status 404 returned error can't find the container with id cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a Jan 21 14:52:03 crc kubenswrapper[4834]: W0121 14:52:03.045414 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd730c74_508a_455b_b4f2_4533c126bf96.slice/crio-b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18 WatchSource:0}: Error finding container b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18: Status 404 returned error can't find the container with id b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18 Jan 21 14:52:03 crc kubenswrapper[4834]: I0121 14:52:03.049888 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 14:52:03 crc kubenswrapper[4834]: I0121 14:52:03.059975 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:03 crc kubenswrapper[4834]: W0121 14:52:03.064141 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a7aea7f_5fa9_4568_a7f0_d30dbda1a2e7.slice/crio-834540a90381b496bb7abde7faf58e041ea907c812063033cfd2b24989d077ac WatchSource:0}: Error finding container 834540a90381b496bb7abde7faf58e041ea907c812063033cfd2b24989d077ac: Status 404 returned error can't find the container with id 834540a90381b496bb7abde7faf58e041ea907c812063033cfd2b24989d077ac Jan 21 14:52:03 crc kubenswrapper[4834]: I0121 14:52:03.711173 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:03 crc kubenswrapper[4834]: W0121 14:52:03.719459 4834 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6d95762_60a1_4c7b_92e3_03ea6ec10637.slice/crio-9934f0cc7c34dfcfde2dd8b9666054405020cb5335d310aeea9b77718c201179 WatchSource:0}: Error finding container 9934f0cc7c34dfcfde2dd8b9666054405020cb5335d310aeea9b77718c201179: Status 404 returned error can't find the container with id 9934f0cc7c34dfcfde2dd8b9666054405020cb5335d310aeea9b77718c201179 Jan 21 14:52:04 crc kubenswrapper[4834]: I0121 14:52:04.000058 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" event={"ID":"dd730c74-508a-455b-b4f2-4533c126bf96","Type":"ContainerStarted","Data":"b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18"} Jan 21 14:52:04 crc kubenswrapper[4834]: I0121 14:52:04.002370 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvgk7" event={"ID":"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5","Type":"ContainerStarted","Data":"cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a"} Jan 21 14:52:04 crc kubenswrapper[4834]: I0121 14:52:04.004030 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerStarted","Data":"834540a90381b496bb7abde7faf58e041ea907c812063033cfd2b24989d077ac"} Jan 21 14:52:04 crc kubenswrapper[4834]: I0121 14:52:04.005486 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerStarted","Data":"9934f0cc7c34dfcfde2dd8b9666054405020cb5335d310aeea9b77718c201179"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.028364 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvgk7" event={"ID":"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5","Type":"ContainerStarted","Data":"faf65ed7d70df2b0daa57f403ad209c034d9c6c11576362ea79a3947583a86aa"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.034951 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerStarted","Data":"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.037505 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerStarted","Data":"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.039545 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerStarted","Data":"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.041654 4834 generic.go:334] "Generic (PLEG): container finished" podID="dd730c74-508a-455b-b4f2-4533c126bf96" containerID="a33ded5cfaf8c189abcd7f8aed73029a4a13ca9d9df07bab8cbd4ec86ae598cb" exitCode=0 Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 14:52:05.041693 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" event={"ID":"dd730c74-508a-455b-b4f2-4533c126bf96","Type":"ContainerDied","Data":"a33ded5cfaf8c189abcd7f8aed73029a4a13ca9d9df07bab8cbd4ec86ae598cb"} Jan 21 14:52:05 crc kubenswrapper[4834]: I0121 
14:52:05.082055 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gvgk7" podStartSLOduration=20.082020373 podStartE2EDuration="20.082020373s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:05.054396584 +0000 UTC m=+1271.028745629" watchObservedRunningTime="2026-01-21 14:52:05.082020373 +0000 UTC m=+1271.056369418" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.062174 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerStarted","Data":"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e"} Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.062393 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-httpd" containerID="cri-o://458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" gracePeriod=30 Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.062393 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-log" containerID="cri-o://e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" gracePeriod=30 Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.065054 4834 generic.go:334] "Generic (PLEG): container finished" podID="3ed348d9-9d38-4546-a839-0930def4c9f3" containerID="019916aa32a8d05882acc74c8e5940afedccdc288cba42e68ac13dd506dc1fa9" exitCode=0 Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.065143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-722hs" event={"ID":"3ed348d9-9d38-4546-a839-0930def4c9f3","Type":"ContainerDied","Data":"019916aa32a8d05882acc74c8e5940afedccdc288cba42e68ac13dd506dc1fa9"} Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.071148 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" event={"ID":"dd730c74-508a-455b-b4f2-4533c126bf96","Type":"ContainerStarted","Data":"238e9220c679c8123c7b1326da2371187ff7c09c525c7f859da1fe9356a819b6"} Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.071282 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.075364 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerStarted","Data":"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb"} Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.075448 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-log" containerID="cri-o://f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd" gracePeriod=30 Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.075460 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-httpd" 
containerID="cri-o://1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" gracePeriod=30 Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.096292 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.096271479 podStartE2EDuration="21.096271479s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:06.086737343 +0000 UTC m=+1272.061086388" watchObservedRunningTime="2026-01-21 14:52:06.096271479 +0000 UTC m=+1272.070620524" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.119990 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.119966166 podStartE2EDuration="21.119966166s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:06.109214382 +0000 UTC m=+1272.083563427" watchObservedRunningTime="2026-01-21 14:52:06.119966166 +0000 UTC m=+1272.094315221" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.163445 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" podStartSLOduration=21.163426727 podStartE2EDuration="21.163426727s" podCreationTimestamp="2026-01-21 14:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:06.156389988 +0000 UTC m=+1272.130739033" watchObservedRunningTime="2026-01-21 14:52:06.163426727 +0000 UTC m=+1272.137775772" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.810223 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.920897 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.920579 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921495 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w29vg\" (UniqueName: \"kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921538 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921602 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921685 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921711 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.921735 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run\") pod \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\" (UID: \"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7\") " Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.922494 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.928329 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs" (OuterVolumeSpecName: "logs") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.930908 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts" (OuterVolumeSpecName: "scripts") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.931205 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg" (OuterVolumeSpecName: "kube-api-access-w29vg") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "kube-api-access-w29vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.932702 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.971052 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:06 crc kubenswrapper[4834]: I0121 14:52:06.990228 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data" (OuterVolumeSpecName: "config-data") pod "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" (UID: "8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.023722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbhd5\" (UniqueName: \"kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.023820 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.023876 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024008 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024266 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024325 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024353 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs\") pod \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\" (UID: \"a6d95762-60a1-4c7b-92e3-03ea6ec10637\") " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024779 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024803 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024780 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024820 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w29vg\" (UniqueName: \"kubernetes.io/projected/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-kube-api-access-w29vg\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024867 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs" (OuterVolumeSpecName: "logs") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024886 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024945 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024973 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.024988 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.029232 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.030688 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts" (OuterVolumeSpecName: "scripts") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.032230 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5" (OuterVolumeSpecName: "kube-api-access-wbhd5") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "kube-api-access-wbhd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.052009 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.059736 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.072750 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data" (OuterVolumeSpecName: "config-data") pod "a6d95762-60a1-4c7b-92e3-03ea6ec10637" (UID: "a6d95762-60a1-4c7b-92e3-03ea6ec10637"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085731 4834 generic.go:334] "Generic (PLEG): container finished" podID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerID="1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" exitCode=0 Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085765 4834 generic.go:334] "Generic (PLEG): container finished" podID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerID="f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd" exitCode=143 Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085802 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerDied","Data":"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerDied","Data":"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085842 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7","Type":"ContainerDied","Data":"834540a90381b496bb7abde7faf58e041ea907c812063033cfd2b24989d077ac"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.085858 4834 scope.go:117] "RemoveContainer" containerID="1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.086010 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.097704 4834 generic.go:334] "Generic (PLEG): container finished" podID="8becb166-563c-43ec-8d07-567f51c39d64" containerID="9e0eb4b587e1c4a06b4592384cbe89012e12116150f42a3c1da82d69989a8bf5" exitCode=0 Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.097773 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmr8w" event={"ID":"8becb166-563c-43ec-8d07-567f51c39d64","Type":"ContainerDied","Data":"9e0eb4b587e1c4a06b4592384cbe89012e12116150f42a3c1da82d69989a8bf5"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102154 4834 generic.go:334] "Generic (PLEG): container finished" podID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerID="458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" exitCode=0 Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102196 4834 generic.go:334] "Generic (PLEG): container finished" podID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerID="e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" exitCode=143 Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102264 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerDied","Data":"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102291 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerDied","Data":"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102306 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a6d95762-60a1-4c7b-92e3-03ea6ec10637","Type":"ContainerDied","Data":"9934f0cc7c34dfcfde2dd8b9666054405020cb5335d310aeea9b77718c201179"} Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.102463 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126520 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126566 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126576 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126587 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126597 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6d95762-60a1-4c7b-92e3-03ea6ec10637-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126606 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126615 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6d95762-60a1-4c7b-92e3-03ea6ec10637-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.126623 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbhd5\" (UniqueName: \"kubernetes.io/projected/a6d95762-60a1-4c7b-92e3-03ea6ec10637-kube-api-access-wbhd5\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.160897 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.162668 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.184006 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.205081 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.226563 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.228979 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.244454 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245121 4834 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="init" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245172 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="init" Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245202 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245208 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245231 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245242 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245252 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245258 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245272 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245279 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: E0121 14:52:07.245286 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245292 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245495 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245579 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245603 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-log" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245619 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcd0de6-69da-45fb-8d4b-d4e94e087ec0" containerName="dnsmasq-dns" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.245631 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" containerName="glance-httpd" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.247017 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.250787 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z26b8" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.250867 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.251596 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.251654 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.255282 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.256881 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.259469 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.259659 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.272649 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.285769 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332021 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332095 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332113 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332135 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332158 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332183 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332199 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332227 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332243 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332282 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7g2\" (UniqueName: \"kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332305 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332350 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332388 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332410 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332431 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l478h\" (UniqueName: \"kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.332452 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.433912 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.433976 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l478h\" (UniqueName: \"kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434010 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434037 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434077 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434095 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434119 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434142 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434165 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434179 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434217 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434238 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434297 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd7g2\" (UniqueName: \"kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434358 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434882 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.434973 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.435256 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.440000 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.440155 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.440233 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.440526 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.440767 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.447294 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.451911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.453032 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.455175 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.455219 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.462841 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.463064 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd7g2\" (UniqueName: \"kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.464738 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.464764 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.475588 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " pod="openstack/glance-default-external-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.480754 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l478h\" (UniqueName: \"kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.485206 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.579419 4834 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:07 crc kubenswrapper[4834]: I0121 14:52:07.590350 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:52:08 crc kubenswrapper[4834]: I0121 14:52:08.112880 4834 generic.go:334] "Generic (PLEG): container finished" podID="0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" containerID="faf65ed7d70df2b0daa57f403ad209c034d9c6c11576362ea79a3947583a86aa" exitCode=0 Jan 21 14:52:08 crc kubenswrapper[4834]: I0121 14:52:08.113063 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvgk7" event={"ID":"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5","Type":"ContainerDied","Data":"faf65ed7d70df2b0daa57f403ad209c034d9c6c11576362ea79a3947583a86aa"} Jan 21 14:52:08 crc kubenswrapper[4834]: I0121 14:52:08.499848 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7" path="/var/lib/kubelet/pods/8a7aea7f-5fa9-4568-a7f0-d30dbda1a2e7/volumes" Jan 21 14:52:08 crc kubenswrapper[4834]: I0121 14:52:08.501273 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d95762-60a1-4c7b-92e3-03ea6ec10637" path="/var/lib/kubelet/pods/a6d95762-60a1-4c7b-92e3-03ea6ec10637/volumes" Jan 21 14:52:10 crc kubenswrapper[4834]: I0121 14:52:10.818065 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:52:10 crc kubenswrapper[4834]: I0121 14:52:10.889180 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:52:10 crc kubenswrapper[4834]: I0121 14:52:10.889468 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="dnsmasq-dns" containerID="cri-o://65f25c3a7c8cf1c54bef769f4b8f659d8cadc67196eeac39e6de922473fadc20" gracePeriod=10 Jan 21 14:52:11 crc kubenswrapper[4834]: I0121 14:52:11.143164 4834 generic.go:334] "Generic (PLEG): container finished" podID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerID="65f25c3a7c8cf1c54bef769f4b8f659d8cadc67196eeac39e6de922473fadc20" exitCode=0 Jan 21 14:52:11 crc kubenswrapper[4834]: I0121 14:52:11.143356 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" event={"ID":"9e3b5fdd-2a9d-4750-8013-081c1b410875","Type":"ContainerDied","Data":"65f25c3a7c8cf1c54bef769f4b8f659d8cadc67196eeac39e6de922473fadc20"} Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.668075 4834 scope.go:117] "RemoveContainer" containerID="f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd"
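
The 14:52:07 records above trace the normal startup path for the two glance pods: secret caches are populated, every volume is first verified as attached (operationExecutor.VerifyControllerAttachedVolume), then mounted, with the two local PVs additionally reporting "MountVolume.MountDevice succeeded" at their device mount paths (/mnt/openstack/pv05 and /mnt/openstack/pv11), and each volume finishes with "MountVolume.SetUp succeeded" before a fresh sandbox is requested. A rough sketch of how that mount progression could be summarized from a journal capture follows; the regexes and phase labels are illustrative assumptions, not a format kubelet guarantees:

    #!/usr/bin/env python3
    # Sketch: summarize kubelet volume-mount progress from journal lines on stdin.
    # Assumes one record per line and the escaped-quote form seen above,
    # i.e. volume \"name\" ... pod="namespace/pod-name".
    import re
    import sys

    VOLUME = re.compile(r'volume \\"([^"\\]+)\\"')
    POD = re.compile(r'pod="([^"]+)"')
    PHASES = [  # earliest to latest stage of the pipeline
        ("VerifyControllerAttachedVolume started", "attach-verified"),
        ("MountVolume.MountDevice succeeded", "device-mounted"),
        ("MountVolume.SetUp succeeded", "set-up"),
    ]

    progress = {}  # (pod, volume) -> last phase seen
    for line in sys.stdin:
        vol, pod = VOLUME.search(line), POD.search(line)
        if not (vol and pod):
            continue
        for marker, phase in PHASES:
            if marker in line:
                progress[(pod.group(1), vol.group(1))] = phase

    for (pod, vol), phase in sorted(progress.items()):
        print(f"{pod}\t{vol}\t{phase}")

Fed this excerpt (re-split so each "Jan 21 ..." record sits on its own line, as journalctl would emit them), it should report every glance volume as set-up, with only the two local-storage PVs passing through device-mounted.
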
Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.803456 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.810158 4834 scope.go:117] "RemoveContainer" containerID="1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" Jan 21 14:52:13 crc kubenswrapper[4834]: E0121 14:52:13.814093 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb\": container with ID starting with 1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb not found: ID does not exist" containerID="1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.814159 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb"} err="failed to get container status \"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb\": rpc error: code = NotFound desc = could not find container \"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb\": container with ID starting with 1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.814197 4834 scope.go:117] "RemoveContainer" containerID="f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd" Jan 21 14:52:13 crc kubenswrapper[4834]: E0121 14:52:13.815260 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd\": container with ID starting with f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd not found: ID does not exist" containerID="f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.815308 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd"} err="failed to get container status \"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd\": rpc error: code = NotFound desc = could not find container \"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd\": container with ID starting with f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.815331 4834 scope.go:117] "RemoveContainer" containerID="1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.815666 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb"} err="failed to get container status \"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb\": rpc error: code = NotFound desc = could not find container \"1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb\": container with ID starting with 1487f4b291d167beded623930fe9877f8550eac4d72cb1f1331dc9e96a11d1fb not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.815698 4834 scope.go:117] "RemoveContainer" containerID="f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.816495
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd"} err="failed to get container status \"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd\": rpc error: code = NotFound desc = could not find container \"f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd\": container with ID starting with f7757e59703f6a5f825e71bf7e812d868feebeda0ad19b88c6175fb6e34c81cd not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.816624 4834 scope.go:117] "RemoveContainer" containerID="458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.817122 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.848211 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-722hs" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.912850 4834 scope.go:117] "RemoveContainer" containerID="e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.913888 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914054 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914115 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts\") pod \"3ed348d9-9d38-4546-a839-0930def4c9f3\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914196 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtnh\" (UniqueName: \"kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh\") pod \"3ed348d9-9d38-4546-a839-0930def4c9f3\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914264 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs\") pod \"3ed348d9-9d38-4546-a839-0930def4c9f3\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914404 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle\") pod \"3ed348d9-9d38-4546-a839-0930def4c9f3\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914453 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config\") pod \"8becb166-563c-43ec-8d07-567f51c39d64\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914483 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914561 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z9lx\" (UniqueName: \"kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx\") pod \"8becb166-563c-43ec-8d07-567f51c39d64\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914599 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfszc\" (UniqueName: \"kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914636 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914696 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle\") pod \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\" (UID: \"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914774 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") pod \"8becb166-563c-43ec-8d07-567f51c39d64\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.914811 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data\") pod \"3ed348d9-9d38-4546-a839-0930def4c9f3\" (UID: \"3ed348d9-9d38-4546-a839-0930def4c9f3\") " Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.928047 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs" (OuterVolumeSpecName: "logs") pod "3ed348d9-9d38-4546-a839-0930def4c9f3" (UID: "3ed348d9-9d38-4546-a839-0930def4c9f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.929005 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc" (OuterVolumeSpecName: "kube-api-access-cfszc") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "kube-api-access-cfszc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.933455 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts" (OuterVolumeSpecName: "scripts") pod "3ed348d9-9d38-4546-a839-0930def4c9f3" (UID: "3ed348d9-9d38-4546-a839-0930def4c9f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.933833 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.935719 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.951162 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts" (OuterVolumeSpecName: "scripts") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.970505 4834 scope.go:117] "RemoveContainer" containerID="458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" Jan 21 14:52:13 crc kubenswrapper[4834]: E0121 14:52:13.971256 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e\": container with ID starting with 458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e not found: ID does not exist" containerID="458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.971324 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e"} err="failed to get container status \"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e\": rpc error: code = NotFound desc = could not find container \"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e\": container with ID starting with 458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.971362 4834 scope.go:117] "RemoveContainer" containerID="e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" Jan 21 14:52:13 crc kubenswrapper[4834]: E0121 14:52:13.971839 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634\": container with ID starting with e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634 not found: ID does not exist" 
containerID="e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.971867 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634"} err="failed to get container status \"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634\": rpc error: code = NotFound desc = could not find container \"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634\": container with ID starting with e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634 not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.971886 4834 scope.go:117] "RemoveContainer" containerID="458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.972189 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e"} err="failed to get container status \"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e\": rpc error: code = NotFound desc = could not find container \"458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e\": container with ID starting with 458155463b8adcf2caf0ed1687352c943e522bac4c232921c74e1934b8208a2e not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.972210 4834 scope.go:117] "RemoveContainer" containerID="e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.972703 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634"} err="failed to get container status \"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634\": rpc error: code = NotFound desc = could not find container \"e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634\": container with ID starting with e8011ecd0dfaecc0c12f6dfd55d2440d09d73fc551de83288d2aae4ca2926634 not found: ID does not exist" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.978006 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.979834 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx" (OuterVolumeSpecName: "kube-api-access-7z9lx") pod "8becb166-563c-43ec-8d07-567f51c39d64" (UID: "8becb166-563c-43ec-8d07-567f51c39d64"). InnerVolumeSpecName "kube-api-access-7z9lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
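
The RemoveContainer / "DeleteContainer returned error" pairs above show kubelet retrying deletion of container IDs that CRI-O has already garbage-collected: the NotFound answer means the desired end state (container absent) already holds, so the retries converge instead of failing. A minimal sketch of that idempotent-delete pattern, with a hypothetical runtime client standing in for the real CRI plumbing:

    # NotFoundError and runtime are illustrative stand-ins, not kubelet's API.
    class NotFoundError(Exception):
        """Stand-in for an rpc 'code = NotFound' from the container runtime."""

    def remove_container(runtime, container_id: str) -> None:
        try:
            runtime.remove(container_id)
        except NotFoundError:
            # Already gone (runtime GC won the race): "container absent" is the
            # goal state, so record the error and move on rather than fail.
            pass

Kubelet still logs each NotFound, which is why the same container ID can appear in several "DeleteContainer returned error" records here without anything actually being wrong.
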
Jan 21 14:52:13 crc kubenswrapper[4834]: I0121 14:52:13.980260 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh" (OuterVolumeSpecName: "kube-api-access-vwtnh") pod "3ed348d9-9d38-4546-a839-0930def4c9f3" (UID: "3ed348d9-9d38-4546-a839-0930def4c9f3"). InnerVolumeSpecName "kube-api-access-vwtnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.016716 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8becb166-563c-43ec-8d07-567f51c39d64" (UID: "8becb166-563c-43ec-8d07-567f51c39d64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.017050 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") pod \"8becb166-563c-43ec-8d07-567f51c39d64\" (UID: \"8becb166-563c-43ec-8d07-567f51c39d64\") " Jan 21 14:52:14 crc kubenswrapper[4834]: W0121 14:52:14.018142 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8becb166-563c-43ec-8d07-567f51c39d64/volumes/kubernetes.io~secret/combined-ca-bundle Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.018167 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8becb166-563c-43ec-8d07-567f51c39d64" (UID: "8becb166-563c-43ec-8d07-567f51c39d64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.018942 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z9lx\" (UniqueName: \"kubernetes.io/projected/8becb166-563c-43ec-8d07-567f51c39d64-kube-api-access-7z9lx\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.018968 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfszc\" (UniqueName: \"kubernetes.io/projected/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-kube-api-access-cfszc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.018981 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.018996 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.019008 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.019021 4834 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.019035 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.019048 4834 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vwtnh\" (UniqueName: \"kubernetes.io/projected/3ed348d9-9d38-4546-a839-0930def4c9f3-kube-api-access-vwtnh\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.019062 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ed348d9-9d38-4546-a839-0930def4c9f3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.020108 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data" (OuterVolumeSpecName: "config-data") pod "3ed348d9-9d38-4546-a839-0930def4c9f3" (UID: "3ed348d9-9d38-4546-a839-0930def4c9f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.020250 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ed348d9-9d38-4546-a839-0930def4c9f3" (UID: "3ed348d9-9d38-4546-a839-0930def4c9f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.030888 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.036867 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data" (OuterVolumeSpecName: "config-data") pod "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" (UID: "0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.038547 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config" (OuterVolumeSpecName: "config") pod "8becb166-563c-43ec-8d07-567f51c39d64" (UID: "8becb166-563c-43ec-8d07-567f51c39d64"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120418 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120493 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120521 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt564\" (UniqueName: \"kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120580 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.120875 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc\") pod \"9e3b5fdd-2a9d-4750-8013-081c1b410875\" (UID: \"9e3b5fdd-2a9d-4750-8013-081c1b410875\") " Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.123517 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.123560 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8becb166-563c-43ec-8d07-567f51c39d64-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.123576 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.123590 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.123604 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed348d9-9d38-4546-a839-0930def4c9f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.126216 4834 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564" (OuterVolumeSpecName: "kube-api-access-kt564") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "kube-api-access-kt564". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.171850 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.188809 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.190850 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-722hs" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.190850 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-722hs" event={"ID":"3ed348d9-9d38-4546-a839-0930def4c9f3","Type":"ContainerDied","Data":"d539b91e41c282b87553734e3236b8dbb75918e52e4606ce8329299001cea2b7"} Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.191070 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d539b91e41c282b87553734e3236b8dbb75918e52e4606ce8329299001cea2b7" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.194679 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gvgk7" event={"ID":"0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5","Type":"ContainerDied","Data":"cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a"} Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.194908 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae3c814ff284e8304a890afa239582596c4cdfe69b50cc59188eb4f64c3c51a" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.194716 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gvgk7" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.196542 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.198044 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerStarted","Data":"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72"} Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.200354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config" (OuterVolumeSpecName: "config") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.203230 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bmr8w" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.203262 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bmr8w" event={"ID":"8becb166-563c-43ec-8d07-567f51c39d64","Type":"ContainerDied","Data":"4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0"} Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.203294 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4355a7d849025e80b90b10daf4cfd7d0ba41004d487563d428e5b471024087d0" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.208396 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" event={"ID":"9e3b5fdd-2a9d-4750-8013-081c1b410875","Type":"ContainerDied","Data":"4d00fc20aad50c429ac53b788234505bd26517b08a697dab99a462178873d87b"} Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.208439 4834 scope.go:117] "RemoveContainer" containerID="65f25c3a7c8cf1c54bef769f4b8f659d8cadc67196eeac39e6de922473fadc20" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.208561 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.214943 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e3b5fdd-2a9d-4750-8013-081c1b410875" (UID: "9e3b5fdd-2a9d-4750-8013-081c1b410875"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225549 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225575 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225586 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225596 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225607 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt564\" (UniqueName: \"kubernetes.io/projected/9e3b5fdd-2a9d-4750-8013-081c1b410875-kube-api-access-kt564\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.225617 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e3b5fdd-2a9d-4750-8013-081c1b410875-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.244980 4834 scope.go:117] "RemoveContainer" containerID="68aacc6fdae5085b9f828a4853e9a747b1a92443ce7e365653aa84acf9fef628" Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.384861 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.496961 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:52:14 crc kubenswrapper[4834]: W0121 14:52:14.498331 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67626626_3343_484b_9c1c_6d7bee71821f.slice/crio-c16803ecddc8342918bf18bb9c84202b6504712897fa1d3d451fd329052f1e62 WatchSource:0}: Error finding container c16803ecddc8342918bf18bb9c84202b6504712897fa1d3d451fd329052f1e62: Status 404 returned error can't find the container with id c16803ecddc8342918bf18bb9c84202b6504712897fa1d3d451fd329052f1e62 Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.544700 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:52:14 crc kubenswrapper[4834]: I0121 14:52:14.553210 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fc6d4ffc7-dmgsj"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.077512 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"] Jan 21 14:52:15 crc kubenswrapper[4834]: E0121 14:52:15.079100 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" containerName="keystone-bootstrap" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079124 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" containerName="keystone-bootstrap" Jan 21 14:52:15 crc kubenswrapper[4834]: E0121 14:52:15.079139 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8becb166-563c-43ec-8d07-567f51c39d64" containerName="neutron-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079146 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8becb166-563c-43ec-8d07-567f51c39d64" containerName="neutron-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: E0121 14:52:15.079174 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="init" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079179 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="init" Jan 21 14:52:15 crc kubenswrapper[4834]: E0121 14:52:15.079192 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="dnsmasq-dns" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079199 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="dnsmasq-dns" Jan 21 14:52:15 crc kubenswrapper[4834]: E0121 14:52:15.079216 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed348d9-9d38-4546-a839-0930def4c9f3" containerName="placement-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079222 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed348d9-9d38-4546-a839-0930def4c9f3" containerName="placement-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079397 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" containerName="keystone-bootstrap" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079410 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed348d9-9d38-4546-a839-0930def4c9f3" containerName="placement-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079420 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8becb166-563c-43ec-8d07-567f51c39d64" containerName="neutron-db-sync" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.079430 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" containerName="dnsmasq-dns"
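
Before admitting containers for the replacement keystone pod, the CPU and memory managers sweep out per-container assignments left behind by pods that have exited (keystone-bootstrap, neutron-db-sync, placement-db-sync, and the old dnsmasq-dns); each "RemoveStaleState" record is paired with a "Deleted CPUSet assignment" confirming that the state-file entry was dropped. A compact sketch of such a sweep, using assumed data structures rather than kubelet's real cpu_manager/memory_manager state:

    # assignments maps (pod_uid, container_name) -> resource assignment;
    # the layout is an assumption for illustration only.
    def remove_stale_state(assignments: dict, active_pod_uids: set) -> None:
        stale = [key for key in assignments if key[0] not in active_pod_uids]
        for pod_uid, container in stale:
            print(f"removing stale assignment pod={pod_uid} container={container}")
            del assignments[(pod_uid, container)]

Here the active set would contain the incoming keystone pod's UID, while the entries for the four exited pods named above are the ones removed.
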
Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.081955 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.084867 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.089017 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.089244 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.092037 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5qqrx" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.096241 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.096658 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.114066 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169115 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169446 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169468 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169510 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64sg\" (UniqueName: \"kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.169530 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data\") pod \"keystone-577b97bbf9-tdqtw\" (UID: 
\"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.179548 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.179842 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.215144 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.217355 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.237415 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.237692 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.237757 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.238013 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.255142 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cxmjg" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.269125 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.271610 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.283258 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.283304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299462 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299584 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9kv\" (UniqueName: \"kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299650 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299735 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299782 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.299897 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64sg\" (UniqueName: \"kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300002 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " 
pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300043 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300139 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300280 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300322 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.300348 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.304873 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.312123 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.314859 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.331148 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.343077 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.344272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.345824 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.349851 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.357637 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerStarted","Data":"8044822a47f6b5731478b83713a9a93f4edbf333fa544709065dc92fa0016322"} Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.381117 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.384509 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.396432 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.397876 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.398714 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64sg\" (UniqueName: \"kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg\") pod \"keystone-577b97bbf9-tdqtw\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") " pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.402296 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403588 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9n99\" (UniqueName: \"kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403653 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403706 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403726 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403774 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " 
pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403814 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403833 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403865 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403886 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403918 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403969 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.403998 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k9kv\" (UniqueName: \"kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.404362 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.406404 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-86c57" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.406446 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.407052 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " 
pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.422647 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.423291 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.424453 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerStarted","Data":"c16803ecddc8342918bf18bb9c84202b6504712897fa1d3d451fd329052f1e62"} Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.438799 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.446370 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.450619 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.457953 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.471843 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k9kv\" (UniqueName: \"kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv\") pod \"placement-69bb684bc8-6s7qv\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.512457 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.512722 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " 
pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.512807 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9n99\" (UniqueName: \"kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.512959 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513063 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513137 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513211 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9z6d\" (UniqueName: \"kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513365 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513460 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.513565 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " 
pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.515108 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.517868 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.518744 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.518979 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.519070 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.537678 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9n99\" (UniqueName: \"kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99\") pod \"dnsmasq-dns-685444497c-qth6j\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.580225 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.616407 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.616523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.616581 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.616622 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.616651 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9z6d\" (UniqueName: \"kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.620852 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.629147 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.635731 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.655287 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.669450 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.670549 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9z6d\" (UniqueName: \"kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d\") pod \"neutron-5f9dd849bb-b4xzs\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:15 crc kubenswrapper[4834]: I0121 14:52:15.738262 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.350799 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3b5fdd-2a9d-4750-8013-081c1b410875" path="/var/lib/kubelet/pods/9e3b5fdd-2a9d-4750-8013-081c1b410875/volumes" Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.352796 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"] Jan 21 14:52:16 crc kubenswrapper[4834]: W0121 14:52:16.367285 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c4223c5_ca7c_4eb7_a6d6_fc7f4c9dd9e9.slice/crio-0b0e03b6e28c230eadd75717a4ce66633d5d675124e510dff72fbe45294707ea WatchSource:0}: Error finding container 0b0e03b6e28c230eadd75717a4ce66633d5d675124e510dff72fbe45294707ea: Status 404 returned error can't find the container with id 0b0e03b6e28c230eadd75717a4ce66633d5d675124e510dff72fbe45294707ea Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.496724 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5nwpj" event={"ID":"2907abf2-1f9d-497d-bfb3-bf4094e7c174","Type":"ContainerStarted","Data":"0aa4a9c8d5305f703392b0cef4cde0df1d32d6bc28fb27efa810c7fbaf538f33"} Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.503728 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerStarted","Data":"d103dd9297c1d2baee70d0c4f7d9b78e333f4f0afc7a6a14c41398f6488db835"} Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.514970 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-577b97bbf9-tdqtw" event={"ID":"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9","Type":"ContainerStarted","Data":"0b0e03b6e28c230eadd75717a4ce66633d5d675124e510dff72fbe45294707ea"} Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.527607 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5nwpj" podStartSLOduration=3.354064664 podStartE2EDuration="42.52758568s" podCreationTimestamp="2026-01-21 14:51:34 +0000 UTC" firstStartedPulling="2026-01-21 14:51:35.686563028 +0000 UTC m=+1241.660912073" lastFinishedPulling="2026-01-21 14:52:14.860084044 +0000 UTC m=+1280.834433089" observedRunningTime="2026-01-21 14:52:16.520146979 +0000 UTC m=+1282.494496044" watchObservedRunningTime="2026-01-21 14:52:16.52758568 +0000 UTC m=+1282.501934725" Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.529386 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerStarted","Data":"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8"} Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.726388 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.788455 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:52:16 crc kubenswrapper[4834]: I0121 14:52:16.893160 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:52:16 crc kubenswrapper[4834]: W0121 14:52:16.909891 4834 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8066f26_fd07_4d6c_bd1b_44664f2a091b.slice/crio-aacf58f997d39bde0ffc42f7e25cdb3cf5e547683c378882c3c533749789419b WatchSource:0}: Error finding container aacf58f997d39bde0ffc42f7e25cdb3cf5e547683c378882c3c533749789419b: Status 404 returned error can't find the container with id aacf58f997d39bde0ffc42f7e25cdb3cf5e547683c378882c3c533749789419b Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.113993 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.114484 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.614371 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerStarted","Data":"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.633666 4834 generic.go:334] "Generic (PLEG): container finished" podID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerID="201d6ae3b6cfa218d257559e9de5bc47af9c54e22c295e20c2930d30dca03f6d" exitCode=0 Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.633767 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-qth6j" event={"ID":"757e81d8-6937-4a5a-8ebb-8af24d465dc7","Type":"ContainerDied","Data":"201d6ae3b6cfa218d257559e9de5bc47af9c54e22c295e20c2930d30dca03f6d"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.633814 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-qth6j" event={"ID":"757e81d8-6937-4a5a-8ebb-8af24d465dc7","Type":"ContainerStarted","Data":"5309775e84c8ecc11fa7dd8286e72b861a0d691459ec6e4872b37bd7c5bec3dc"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.645006 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerStarted","Data":"aacf58f997d39bde0ffc42f7e25cdb3cf5e547683c378882c3c533749789419b"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.657437 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.657405789 podStartE2EDuration="10.657405789s" podCreationTimestamp="2026-01-21 14:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:17.650257896 +0000 UTC m=+1283.624606971" watchObservedRunningTime="2026-01-21 14:52:17.657405789 +0000 UTC m=+1283.631754834" Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.667695 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8ctw" event={"ID":"13827989-07c5-4417-9be2-574fbca9ddbb","Type":"ContainerStarted","Data":"9e7c05197361670761eb6161d2d7eaa43bcc0eac69d4f23d319dcb70cf798a41"} Jan 
21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.682959 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerStarted","Data":"88a31cfa48c85b0165aa88edab452893c52414c0c101f61a9dfa58c255844c9d"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.696985 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-577b97bbf9-tdqtw" event={"ID":"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9","Type":"ContainerStarted","Data":"0d518f3b26bced1d0a24c01a084dcae54af88409651e0fec0bea0e6b5762886a"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.698104 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.700226 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69bb684bc8-6s7qv" event={"ID":"46ef0752-abe1-465f-8b0b-77906b861c12","Type":"ContainerStarted","Data":"637192b126c7873039ac5b1bfa43915a4f297b8c2c14190ae08138d21215676f"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.700260 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69bb684bc8-6s7qv" event={"ID":"46ef0752-abe1-465f-8b0b-77906b861c12","Type":"ContainerStarted","Data":"b11b36d85d8042425c67f78395ba2222c7ab2375a09a39701f3b6d4b18f6d425"} Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.733361 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x8ctw" podStartSLOduration=4.390669874 podStartE2EDuration="43.733320228s" podCreationTimestamp="2026-01-21 14:51:34 +0000 UTC" firstStartedPulling="2026-01-21 14:51:35.51646865 +0000 UTC m=+1241.490817695" lastFinishedPulling="2026-01-21 14:52:14.859119004 +0000 UTC m=+1280.833468049" observedRunningTime="2026-01-21 14:52:17.710560572 +0000 UTC m=+1283.684909617" watchObservedRunningTime="2026-01-21 14:52:17.733320228 +0000 UTC m=+1283.707669283" Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.767400 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-577b97bbf9-tdqtw" podStartSLOduration=3.767365877 podStartE2EDuration="3.767365877s" podCreationTimestamp="2026-01-21 14:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:17.743130894 +0000 UTC m=+1283.717479949" watchObservedRunningTime="2026-01-21 14:52:17.767365877 +0000 UTC m=+1283.741714922" Jan 21 14:52:17 crc kubenswrapper[4834]: I0121 14:52:17.805774 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.805739421 podStartE2EDuration="10.805739421s" podCreationTimestamp="2026-01-21 14:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:17.791474737 +0000 UTC m=+1283.765823782" watchObservedRunningTime="2026-01-21 14:52:17.805739421 +0000 UTC m=+1283.780088466" Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 14:52:18.742846 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerStarted","Data":"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe"} Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 
14:52:18.743424 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerStarted","Data":"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777"} Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 14:52:18.745775 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 14:52:18.754464 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69bb684bc8-6s7qv" event={"ID":"46ef0752-abe1-465f-8b0b-77906b861c12","Type":"ContainerStarted","Data":"0b54aa54647967f433ead26be60f5ecca8009300967e54ee0684394f6f2cdd0b"} Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 14:52:18.778834 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f9dd849bb-b4xzs" podStartSLOduration=3.778811455 podStartE2EDuration="3.778811455s" podCreationTimestamp="2026-01-21 14:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:18.765325296 +0000 UTC m=+1284.739674361" watchObservedRunningTime="2026-01-21 14:52:18.778811455 +0000 UTC m=+1284.753160500" Jan 21 14:52:18 crc kubenswrapper[4834]: I0121 14:52:18.811290 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69bb684bc8-6s7qv" podStartSLOduration=3.811256494 podStartE2EDuration="3.811256494s" podCreationTimestamp="2026-01-21 14:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:18.807447576 +0000 UTC m=+1284.781796621" watchObservedRunningTime="2026-01-21 14:52:18.811256494 +0000 UTC m=+1284.785605539" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.015574 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.017673 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.020460 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.020643 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.040769 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140264 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140425 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140769 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140831 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.140957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.141162 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdgj\" (UniqueName: \"kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243463 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243540 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243591 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243625 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdgj\" (UniqueName: \"kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243736 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.243801 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.250708 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.250716 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.253091 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") 
" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.253644 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.254015 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.255615 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.271711 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdgj\" (UniqueName: \"kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj\") pod \"neutron-87996dbdf-vzvsk\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.344581 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.769272 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-qth6j" event={"ID":"757e81d8-6937-4a5a-8ebb-8af24d465dc7","Type":"ContainerStarted","Data":"3a0ffcd8d5017104752144e14cf4b864e0f28f97ab625a5895865b99a01ff19d"} Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.771494 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.771554 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:19 crc kubenswrapper[4834]: I0121 14:52:19.826132 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-qth6j" podStartSLOduration=4.826082657 podStartE2EDuration="4.826082657s" podCreationTimestamp="2026-01-21 14:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:19.793269097 +0000 UTC m=+1285.767618162" watchObservedRunningTime="2026-01-21 14:52:19.826082657 +0000 UTC m=+1285.800431702" Jan 21 14:52:20 crc kubenswrapper[4834]: I0121 14:52:20.022665 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:52:20 crc kubenswrapper[4834]: W0121 14:52:20.029234 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod507328e4_20c4_4e84_b781_e4889419607e.slice/crio-fe157082c85ca69b545df4d7e709257dac1a932dabc26e0aa4e5dbff158eced6 WatchSource:0}: Error finding container fe157082c85ca69b545df4d7e709257dac1a932dabc26e0aa4e5dbff158eced6: Status 404 returned error can't find the container 
with id fe157082c85ca69b545df4d7e709257dac1a932dabc26e0aa4e5dbff158eced6 Jan 21 14:52:20 crc kubenswrapper[4834]: I0121 14:52:20.621668 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:20 crc kubenswrapper[4834]: I0121 14:52:20.798833 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerStarted","Data":"5021b65d7845ef850fa0bbc7c486df5fe601a95715feb8020d57c8b3cb5c5c7e"} Jan 21 14:52:20 crc kubenswrapper[4834]: I0121 14:52:20.798942 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerStarted","Data":"fe157082c85ca69b545df4d7e709257dac1a932dabc26e0aa4e5dbff158eced6"} Jan 21 14:52:21 crc kubenswrapper[4834]: I0121 14:52:21.811745 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerStarted","Data":"0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99"} Jan 21 14:52:21 crc kubenswrapper[4834]: I0121 14:52:21.812223 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:52:21 crc kubenswrapper[4834]: I0121 14:52:21.848057 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-87996dbdf-vzvsk" podStartSLOduration=3.848029934 podStartE2EDuration="3.848029934s" podCreationTimestamp="2026-01-21 14:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:21.835498274 +0000 UTC m=+1287.809847329" watchObservedRunningTime="2026-01-21 14:52:21.848029934 +0000 UTC m=+1287.822378979" Jan 21 14:52:25 crc kubenswrapper[4834]: I0121 14:52:25.623463 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:52:25 crc kubenswrapper[4834]: I0121 14:52:25.695559 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:52:25 crc kubenswrapper[4834]: I0121 14:52:25.695828 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" containerID="cri-o://238e9220c679c8123c7b1326da2371187ff7c09c525c7f859da1fe9356a819b6" gracePeriod=10 Jan 21 14:52:25 crc kubenswrapper[4834]: I0121 14:52:25.816784 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Jan 21 14:52:26 crc kubenswrapper[4834]: I0121 14:52:26.860004 4834 generic.go:334] "Generic (PLEG): container finished" podID="dd730c74-508a-455b-b4f2-4533c126bf96" containerID="238e9220c679c8123c7b1326da2371187ff7c09c525c7f859da1fe9356a819b6" exitCode=0 Jan 21 14:52:26 crc kubenswrapper[4834]: I0121 14:52:26.860086 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" event={"ID":"dd730c74-508a-455b-b4f2-4533c126bf96","Type":"ContainerDied","Data":"238e9220c679c8123c7b1326da2371187ff7c09c525c7f859da1fe9356a819b6"} Jan 21 14:52:27 crc 
kubenswrapper[4834]: I0121 14:52:27.580077 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.580148 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.593329 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.593383 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.617321 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.670715 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.681658 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.710130 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.875174 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.875455 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.875506 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:52:27 crc kubenswrapper[4834]: I0121 14:52:27.875522 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.291569 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.292149 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.337086 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.337379 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.337804 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.404666 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:52:30 crc kubenswrapper[4834]: I0121 14:52:30.816063 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Jan 21 14:52:35 crc kubenswrapper[4834]: E0121 
14:52:35.826447 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:58b583bb82da64c3c962ed2ca5e60dfff0fc93e50a9ec95e650cecb3a6cb8fda" Jan 21 14:52:35 crc kubenswrapper[4834]: E0121 14:52:35.827267 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:58b583bb82da64c3c962ed2ca5e60dfff0fc93e50a9ec95e650cecb3a6cb8fda,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx2gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(39e7e8b6-f56f-4c87-9f77-9969923cbe27): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 14:52:35 crc kubenswrapper[4834]: E0121 14:52:35.828529 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" 
for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" Jan 21 14:52:35 crc kubenswrapper[4834]: I0121 14:52:35.961165 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" event={"ID":"dd730c74-508a-455b-b4f2-4533c126bf96","Type":"ContainerDied","Data":"b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18"} Jan 21 14:52:35 crc kubenswrapper[4834]: I0121 14:52:35.961228 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b323ad3eb03422ac5af5f2b1eb5cf2943eb0cb0be145b3f935d799f490e9ce18" Jan 21 14:52:35 crc kubenswrapper[4834]: I0121 14:52:35.961412 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="ceilometer-notification-agent" containerID="cri-o://5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2" gracePeriod=30 Jan 21 14:52:35 crc kubenswrapper[4834]: I0121 14:52:35.961448 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="sg-core" containerID="cri-o://89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72" gracePeriod=30 Jan 21 14:52:35 crc kubenswrapper[4834]: I0121 14:52:35.973172 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128011 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128622 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plkwr\" (UniqueName: \"kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128658 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128704 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128759 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.128873 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb\") pod \"dd730c74-508a-455b-b4f2-4533c126bf96\" (UID: \"dd730c74-508a-455b-b4f2-4533c126bf96\") " Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.136069 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr" (OuterVolumeSpecName: "kube-api-access-plkwr") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "kube-api-access-plkwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.182107 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config" (OuterVolumeSpecName: "config") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.182150 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.183499 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.187980 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.195626 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd730c74-508a-455b-b4f2-4533c126bf96" (UID: "dd730c74-508a-455b-b4f2-4533c126bf96"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231441 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231490 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231504 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plkwr\" (UniqueName: \"kubernetes.io/projected/dd730c74-508a-455b-b4f2-4533c126bf96-kube-api-access-plkwr\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231516 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231527 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.231538 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd730c74-508a-455b-b4f2-4533c126bf96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.970747 4834 generic.go:334] "Generic (PLEG): container finished" podID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerID="89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72" exitCode=2 Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.970844 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.970850 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerDied","Data":"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72"} Jan 21 14:52:36 crc kubenswrapper[4834]: I0121 14:52:36.995409 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:52:37 crc kubenswrapper[4834]: I0121 14:52:37.003761 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rg9z9"] Jan 21 14:52:38 crc kubenswrapper[4834]: I0121 14:52:38.339391 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" path="/var/lib/kubelet/pods/dd730c74-508a-455b-b4f2-4533c126bf96/volumes" Jan 21 14:52:39 crc kubenswrapper[4834]: I0121 14:52:39.977739 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.023536 4834 generic.go:334] "Generic (PLEG): container finished" podID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerID="5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2" exitCode=0 Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.023602 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerDied","Data":"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2"} Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.023636 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39e7e8b6-f56f-4c87-9f77-9969923cbe27","Type":"ContainerDied","Data":"df7c8df28f9144f756ea5fe91582865be91b26c702281417de012c2215d4572e"} Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.023653 4834 scope.go:117] "RemoveContainer" containerID="89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.023832 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.053798 4834 scope.go:117] "RemoveContainer" containerID="5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.077892 4834 scope.go:117] "RemoveContainer" containerID="89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72" Jan 21 14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.078438 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72\": container with ID starting with 89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72 not found: ID does not exist" containerID="89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.078488 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72"} err="failed to get container status \"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72\": rpc error: code = NotFound desc = could not find container \"89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72\": container with ID starting with 89e44b6c0814a3a3bd57abb5a2bb7048676880a1920f40bc6ca5242dc888ba72 not found: ID does not exist" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.078517 4834 scope.go:117] "RemoveContainer" containerID="5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2" Jan 21 14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.078895 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2\": container with ID starting with 5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2 not found: ID does not exist" containerID="5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.078948 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2"} 
err="failed to get container status \"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2\": rpc error: code = NotFound desc = could not find container \"5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2\": container with ID starting with 5df395ce205e5f810eaee221d8b59a4b11a2ed60eba723bb0f90393a58a29da2 not found: ID does not exist" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108638 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108708 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108774 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108814 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108859 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.108966 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx2gt\" (UniqueName: \"kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.109084 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd\") pod \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\" (UID: \"39e7e8b6-f56f-4c87-9f77-9969923cbe27\") " Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.109389 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.109815 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.115631 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt" (OuterVolumeSpecName: "kube-api-access-dx2gt") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "kube-api-access-dx2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.122203 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts" (OuterVolumeSpecName: "scripts") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.138671 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.139978 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.144809 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data" (OuterVolumeSpecName: "config-data") pod "39e7e8b6-f56f-4c87-9f77-9969923cbe27" (UID: "39e7e8b6-f56f-4c87-9f77-9969923cbe27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210715 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210757 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210767 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210778 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210787 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39e7e8b6-f56f-4c87-9f77-9969923cbe27-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210796 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx2gt\" (UniqueName: \"kubernetes.io/projected/39e7e8b6-f56f-4c87-9f77-9969923cbe27-kube-api-access-dx2gt\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.210805 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39e7e8b6-f56f-4c87-9f77-9969923cbe27-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.389996 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.400055 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.414226 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.414725 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="ceilometer-notification-agent" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.414746 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="ceilometer-notification-agent" Jan 21 14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.414768 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.414778 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" Jan 21 14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.414825 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="sg-core" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.414833 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="sg-core" Jan 21 
14:52:40 crc kubenswrapper[4834]: E0121 14:52:40.414847 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="init" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.414854 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="init" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.415113 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.415165 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="sg-core" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.415196 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" containerName="ceilometer-notification-agent" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.418838 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.421319 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.421612 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.441589 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.515973 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jvv\" (UniqueName: \"kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516074 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516104 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516119 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516135 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516155 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.516172 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.617894 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29jvv\" (UniqueName: \"kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618025 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618059 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618075 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618093 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618113 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.618132 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.619206 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 
14:52:40.618917 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.624216 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.625197 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.628842 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.631686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.637459 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29jvv\" (UniqueName: \"kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv\") pod \"ceilometer-0\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.748131 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:52:40 crc kubenswrapper[4834]: I0121 14:52:40.817407 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-rg9z9" podUID="dd730c74-508a-455b-b4f2-4533c126bf96" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Jan 21 14:52:41 crc kubenswrapper[4834]: W0121 14:52:41.240182 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b7d697c_0f23_4ea5_b8eb_2735d019c579.slice/crio-ff2b9e238339f3be70587acb0ee8ed13b55bee934608b7146b3ac8d3983abe6b WatchSource:0}: Error finding container ff2b9e238339f3be70587acb0ee8ed13b55bee934608b7146b3ac8d3983abe6b: Status 404 returned error can't find the container with id ff2b9e238339f3be70587acb0ee8ed13b55bee934608b7146b3ac8d3983abe6b Jan 21 14:52:41 crc kubenswrapper[4834]: I0121 14:52:41.241421 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:52:42 crc kubenswrapper[4834]: I0121 14:52:42.048520 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerStarted","Data":"ff2b9e238339f3be70587acb0ee8ed13b55bee934608b7146b3ac8d3983abe6b"} Jan 21 14:52:42 crc kubenswrapper[4834]: I0121 14:52:42.337056 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e7e8b6-f56f-4c87-9f77-9969923cbe27" path="/var/lib/kubelet/pods/39e7e8b6-f56f-4c87-9f77-9969923cbe27/volumes" Jan 21 14:52:44 crc kubenswrapper[4834]: I0121 14:52:44.070179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerStarted","Data":"dc169311afe3db8ca15db4d240544cf54976b924f04fb5da4fff06947d2e0efa"} Jan 21 14:52:45 crc kubenswrapper[4834]: I0121 14:52:45.746839 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5f9dd849bb-b4xzs" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:52:45 crc kubenswrapper[4834]: I0121 14:52:45.747965 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5f9dd849bb-b4xzs" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:52:45 crc kubenswrapper[4834]: I0121 14:52:45.748588 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5f9dd849bb-b4xzs" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:52:45 crc kubenswrapper[4834]: I0121 14:52:45.777413 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:52:46 crc kubenswrapper[4834]: I0121 14:52:46.096981 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerStarted","Data":"ac92aa1ee28b63ff7a8f90fadfabf79c9231af0774d8143c0f746ce691c16513"} Jan 21 14:52:47 crc kubenswrapper[4834]: I0121 14:52:47.114331 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:52:47 crc kubenswrapper[4834]: I0121 14:52:47.114766 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:52:47 crc kubenswrapper[4834]: I0121 14:52:47.370187 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:47 crc kubenswrapper[4834]: I0121 14:52:47.496908 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-577b97bbf9-tdqtw" Jan 21 14:52:48 crc kubenswrapper[4834]: I0121 14:52:48.121133 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerStarted","Data":"f5c19b7aae428c88697dd775ffecf53214835dd11d9d1722c0805fa542895d55"} Jan 21 14:52:48 crc kubenswrapper[4834]: I0121 14:52:48.270419 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.134521 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerStarted","Data":"95707426bce03d4a84e74565e8448046ded898ff61888ea4ee07087e4b80938e"} Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.135149 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.138743 4834 generic.go:334] "Generic (PLEG): container finished" podID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" containerID="0aa4a9c8d5305f703392b0cef4cde0df1d32d6bc28fb27efa810c7fbaf538f33" exitCode=0 Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.138840 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5nwpj" event={"ID":"2907abf2-1f9d-497d-bfb3-bf4094e7c174","Type":"ContainerDied","Data":"0aa4a9c8d5305f703392b0cef4cde0df1d32d6bc28fb27efa810c7fbaf538f33"} Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.165908 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.675398951 podStartE2EDuration="9.165875788s" podCreationTimestamp="2026-01-21 14:52:40 +0000 UTC" firstStartedPulling="2026-01-21 14:52:41.243782067 +0000 UTC m=+1307.218131122" lastFinishedPulling="2026-01-21 14:52:48.734258914 +0000 UTC m=+1314.708607959" observedRunningTime="2026-01-21 14:52:49.156185967 +0000 UTC m=+1315.130535002" watchObservedRunningTime="2026-01-21 14:52:49.165875788 +0000 UTC m=+1315.140224833" Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.356328 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.361013 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-api" probeResult="failure" output="HTTP probe failed 
with statuscode: 503" Jan 21 14:52:49 crc kubenswrapper[4834]: I0121 14:52:49.368862 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.527891 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.646603 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle\") pod \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.647062 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmnkt\" (UniqueName: \"kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt\") pod \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.647270 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data\") pod \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\" (UID: \"2907abf2-1f9d-497d-bfb3-bf4094e7c174\") " Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.659282 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2907abf2-1f9d-497d-bfb3-bf4094e7c174" (UID: "2907abf2-1f9d-497d-bfb3-bf4094e7c174"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.675229 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt" (OuterVolumeSpecName: "kube-api-access-fmnkt") pod "2907abf2-1f9d-497d-bfb3-bf4094e7c174" (UID: "2907abf2-1f9d-497d-bfb3-bf4094e7c174"). InnerVolumeSpecName "kube-api-access-fmnkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.778004 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmnkt\" (UniqueName: \"kubernetes.io/projected/2907abf2-1f9d-497d-bfb3-bf4094e7c174-kube-api-access-fmnkt\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.778054 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.797239 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2907abf2-1f9d-497d-bfb3-bf4094e7c174" (UID: "2907abf2-1f9d-497d-bfb3-bf4094e7c174"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:50 crc kubenswrapper[4834]: I0121 14:52:50.880085 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2907abf2-1f9d-497d-bfb3-bf4094e7c174-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.137645 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 14:52:51 crc kubenswrapper[4834]: E0121 14:52:51.138108 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" containerName="barbican-db-sync" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.138123 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" containerName="barbican-db-sync" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.138266 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" containerName="barbican-db-sync" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.138969 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.142050 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.142179 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.143446 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qrw7v" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.159210 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5nwpj" event={"ID":"2907abf2-1f9d-497d-bfb3-bf4094e7c174","Type":"ContainerDied","Data":"478feb9bed9d69e2ff284a1486bed4de2e91b35aa61e5b1ce974ac7d2cb30bd4"} Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.159545 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478feb9bed9d69e2ff284a1486bed4de2e91b35aa61e5b1ce974ac7d2cb30bd4" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.159641 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.159577 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5nwpj" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.229107 4834 generic.go:334] "Generic (PLEG): container finished" podID="13827989-07c5-4417-9be2-574fbca9ddbb" containerID="9e7c05197361670761eb6161d2d7eaa43bcc0eac69d4f23d319dcb70cf798a41" exitCode=0 Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.229190 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8ctw" event={"ID":"13827989-07c5-4417-9be2-574fbca9ddbb","Type":"ContainerDied","Data":"9e7c05197361670761eb6161d2d7eaa43bcc0eac69d4f23d319dcb70cf798a41"} Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.326328 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.326402 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.326437 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfwzt\" (UniqueName: \"kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.326609 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.428376 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.428525 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.428568 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.428602 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfwzt\" (UniqueName: 
\"kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.429555 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.437274 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.437563 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.495776 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfwzt\" (UniqueName: \"kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt\") pod \"openstackclient\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.519745 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.521582 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.529055 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.529274 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.529448 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wjsjv" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.537889 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.550988 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.552625 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.559217 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.574968 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.628006 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.634152 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.634245 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.634314 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.634346 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfk5m\" (UniqueName: \"kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.634421 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.679272 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.681216 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.693439 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736499 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736556 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736582 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjd8\" (UniqueName: \"kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736607 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfk5m\" (UniqueName: \"kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736634 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736656 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736680 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " 
pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736772 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.736797 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.739421 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.743642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.743811 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.745368 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.768858 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfk5m\" (UniqueName: \"kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m\") pod \"barbican-keystone-listener-5d5d49578b-z9xbl\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.830379 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.831978 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.835388 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840026 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840097 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840167 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840195 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjd8\" (UniqueName: \"kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840228 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd6nh\" (UniqueName: \"kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840273 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840297 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840346 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840375 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840408 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.840830 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.850966 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.855272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.855598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.855762 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.876490 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjd8\" (UniqueName: \"kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8\") pod \"barbican-worker-5696b4bbb9-8l4r8\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942207 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom\") pod \"barbican-api-87c79555d-cxwgx\" (UID: 
\"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942261 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942293 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942365 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd6nh\" (UniqueName: \"kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942425 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942444 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942474 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942503 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942525 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmzgr\" (UniqueName: \"kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr\") pod 
\"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.942547 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.943617 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.943768 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.944378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.946997 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.950566 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.972976 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd6nh\" (UniqueName: \"kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh\") pod \"dnsmasq-dns-66cdd4b5b5-4fvnn\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:51 crc kubenswrapper[4834]: I0121 14:52:51.994657 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.015540 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.035249 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.045089 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.045176 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmzgr\" (UniqueName: \"kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.045278 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.045320 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.045461 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.046491 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.057378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.066182 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.072135 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc 
kubenswrapper[4834]: I0121 14:52:52.072554 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmzgr\" (UniqueName: \"kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr\") pod \"barbican-api-87c79555d-cxwgx\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.172652 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.215527 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.559701 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.676681 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:52:52 crc kubenswrapper[4834]: W0121 14:52:52.707076 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84309501_c399_4d83_9876_00b58ba67b0d.slice/crio-8efef12bdabeffe63c9009bb5dce3f5a5ad7667c43a921d22db342d8beb3c8f7 WatchSource:0}: Error finding container 8efef12bdabeffe63c9009bb5dce3f5a5ad7667c43a921d22db342d8beb3c8f7: Status 404 returned error can't find the container with id 8efef12bdabeffe63c9009bb5dce3f5a5ad7667c43a921d22db342d8beb3c8f7 Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.842036 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.882608 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6v6k\" (UniqueName: \"kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.882850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.883773 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.883827 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.883918 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 
14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.884078 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data\") pod \"13827989-07c5-4417-9be2-574fbca9ddbb\" (UID: \"13827989-07c5-4417-9be2-574fbca9ddbb\") " Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.888890 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.907575 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k" (OuterVolumeSpecName: "kube-api-access-g6v6k") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "kube-api-access-g6v6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.907955 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.921982 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.927558 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts" (OuterVolumeSpecName: "scripts") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:52 crc kubenswrapper[4834]: I0121 14:52:52.991887 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.018081 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.018369 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6v6k\" (UniqueName: \"kubernetes.io/projected/13827989-07c5-4417-9be2-574fbca9ddbb-kube-api-access-g6v6k\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.018443 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.018501 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.018588 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13827989-07c5-4417-9be2-574fbca9ddbb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.070489 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.095412 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data" (OuterVolumeSpecName: "config-data") pod "13827989-07c5-4417-9be2-574fbca9ddbb" (UID: "13827989-07c5-4417-9be2-574fbca9ddbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.120611 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13827989-07c5-4417-9be2-574fbca9ddbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.275472 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerStarted","Data":"6a5d422c1b706b8b694f3504637d43e6f9b38d6d5d9955fb7f5dcba94bd7480e"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.280852 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" event={"ID":"7bd333c9-53d0-425c-9b3e-b2dff073b7e2","Type":"ContainerStarted","Data":"7c7bcb37d96a7d651e82da7218f173c2f43e217dac0e5225c2fc64c6e892b755"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.284970 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x8ctw" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.286509 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x8ctw" event={"ID":"13827989-07c5-4417-9be2-574fbca9ddbb","Type":"ContainerDied","Data":"69712b0520b8346fab377f4f1fedc16b5f1531a7ac0990161eb74f41e736e2be"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.286818 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69712b0520b8346fab377f4f1fedc16b5f1531a7ac0990161eb74f41e736e2be" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.328997 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerStarted","Data":"8bdbb65d28c073a9d28c95f96fd3a5259228e31c75cc66f965eef6d06284c3a4"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.331087 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerStarted","Data":"8efef12bdabeffe63c9009bb5dce3f5a5ad7667c43a921d22db342d8beb3c8f7"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.333526 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2","Type":"ContainerStarted","Data":"0f5fc260c6a469bcc4cdc61cdc2a3b5d5a6c9b4831e58f81058dbba133df5a24"} Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.615401 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:52:53 crc kubenswrapper[4834]: E0121 14:52:53.615844 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" containerName="cinder-db-sync" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.615856 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" containerName="cinder-db-sync" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.616105 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" containerName="cinder-db-sync" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.617540 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.621916 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.622149 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qhxjf" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.625674 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.626281 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.661058 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.705259 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.731684 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.733847 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.747182 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.747851 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.747872 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.747944 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.747988 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrqh\" (UniqueName: \"kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.748016 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.761657 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859249 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859406 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859438 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859485 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859545 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859574 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859623 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859652 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrqh\" (UniqueName: \"kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859681 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-cj2jr\" (UniqueName: \"kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.859712 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.860859 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.861676 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.861733 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.867608 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.873716 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.878140 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.902956 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data\") pod \"cinder-scheduler-0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.916560 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrqh\" (UniqueName: \"kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh\") pod \"cinder-scheduler-0\" (UID: 
\"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " pod="openstack/cinder-scheduler-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.928156 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.929913 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.943119 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.951922 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967367 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967437 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967475 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2jr\" (UniqueName: \"kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967525 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.967543 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.968470 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.969093 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.969692 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.970286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.971155 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:53 crc kubenswrapper[4834]: I0121 14:52:53.977726 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.004777 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2jr\" (UniqueName: \"kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr\") pod \"dnsmasq-dns-75dbb546bf-5qxxk\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.069629 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.069738 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.069833 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.069948 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.070039 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.070255 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4s2\" (UniqueName: \"kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.070321 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.070763 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172548 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172629 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172654 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172681 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172722 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172795 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4s2\" (UniqueName: \"kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.172821 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom\") 
pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.177503 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.178771 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.179239 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.179300 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.181335 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.182400 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.202980 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4s2\" (UniqueName: \"kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2\") pod \"cinder-api-0\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.356899 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.470778 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerStarted","Data":"e95c53cf888d112fdbdbde8f0fcfaaa4a86cf83cf3a1f38b830150e00835be97"} Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.470819 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerStarted","Data":"e5456c5eea9d230570ec458b5098a532062a787b2e38ed573fb4c4efa88da65e"} Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.470867 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.470891 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.479197 4834 generic.go:334] "Generic (PLEG): container finished" podID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" containerID="4d6fbb84eb371a752d41d29be6f22a4ba4a58bb87417f54a0d562b59b2b8345d" exitCode=0 Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.479144 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" event={"ID":"7bd333c9-53d0-425c-9b3e-b2dff073b7e2","Type":"ContainerDied","Data":"4d6fbb84eb371a752d41d29be6f22a4ba4a58bb87417f54a0d562b59b2b8345d"} Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.744375 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-87c79555d-cxwgx" podStartSLOduration=3.74433032 podStartE2EDuration="3.74433032s" podCreationTimestamp="2026-01-21 14:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:52:54.638788353 +0000 UTC m=+1320.613137398" watchObservedRunningTime="2026-01-21 14:52:54.74433032 +0000 UTC m=+1320.718679375" Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.770020 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:52:54 crc kubenswrapper[4834]: I0121 14:52:54.975550 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"] Jan 21 14:52:55 crc kubenswrapper[4834]: I0121 14:52:55.180334 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:52:55 crc kubenswrapper[4834]: I0121 14:52:55.498856 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerStarted","Data":"8bfc065ed72c9308c2845e76f3accbf3de0333321df08fc66a1efcca721d93bb"} Jan 21 14:52:55 crc kubenswrapper[4834]: E0121 14:52:55.679176 4834 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 21 14:52:55 crc kubenswrapper[4834]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/7bd333c9-53d0-425c-9b3e-b2dff073b7e2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 14:52:55 crc kubenswrapper[4834]: > podSandboxID="7c7bcb37d96a7d651e82da7218f173c2f43e217dac0e5225c2fc64c6e892b755" Jan 21 14:52:55 crc kubenswrapper[4834]: E0121 14:52:55.679396 4834 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:52:55 crc kubenswrapper[4834]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dd6nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-66cdd4b5b5-4fvnn_openstack(7bd333c9-53d0-425c-9b3e-b2dff073b7e2): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/7bd333c9-53d0-425c-9b3e-b2dff073b7e2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or 
directory Jan 21 14:52:55 crc kubenswrapper[4834]: > logger="UnhandledError" Jan 21 14:52:55 crc kubenswrapper[4834]: E0121 14:52:55.680877 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/7bd333c9-53d0-425c-9b3e-b2dff073b7e2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" podUID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" Jan 21 14:52:55 crc kubenswrapper[4834]: W0121 14:52:55.717850 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc644157e_579c_4348_bc28_fd5a273dfb02.slice/crio-b6b43574cc5d2868a3bb56e2c6778bff68ea0e86f068f673d7dbec0cb2615052 WatchSource:0}: Error finding container b6b43574cc5d2868a3bb56e2c6778bff68ea0e86f068f673d7dbec0cb2615052: Status 404 returned error can't find the container with id b6b43574cc5d2868a3bb56e2c6778bff68ea0e86f068f673d7dbec0cb2615052 Jan 21 14:52:56 crc kubenswrapper[4834]: I0121 14:52:56.529119 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" event={"ID":"c644157e-579c-4348-bc28-fd5a273dfb02","Type":"ContainerStarted","Data":"b6b43574cc5d2868a3bb56e2c6778bff68ea0e86f068f673d7dbec0cb2615052"} Jan 21 14:52:56 crc kubenswrapper[4834]: I0121 14:52:56.541294 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerStarted","Data":"8b7abf46897711fcb2a7bb9fc1c10de568ad21e4e75275318f58fbc3786bae0c"} Jan 21 14:52:56 crc kubenswrapper[4834]: I0121 14:52:56.858422 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.093393 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190226 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190292 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190318 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd6nh\" (UniqueName: \"kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190509 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190529 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.190606 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0\") pod \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\" (UID: \"7bd333c9-53d0-425c-9b3e-b2dff073b7e2\") " Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.199287 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh" (OuterVolumeSpecName: "kube-api-access-dd6nh") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "kube-api-access-dd6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.304251 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.331722 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.331767 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd6nh\" (UniqueName: \"kubernetes.io/projected/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-kube-api-access-dd6nh\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.333352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.339967 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.356024 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config" (OuterVolumeSpecName: "config") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.359447 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7bd333c9-53d0-425c-9b3e-b2dff073b7e2" (UID: "7bd333c9-53d0-425c-9b3e-b2dff073b7e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.434566 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.434602 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.434645 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.434721 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7bd333c9-53d0-425c-9b3e-b2dff073b7e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.563322 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerStarted","Data":"e22110aadbd884384da0840ce5e19cbdcf2bf4d1b9fffe3c6126085fdedeb6df"} Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.567130 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" event={"ID":"7bd333c9-53d0-425c-9b3e-b2dff073b7e2","Type":"ContainerDied","Data":"7c7bcb37d96a7d651e82da7218f173c2f43e217dac0e5225c2fc64c6e892b755"} Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.567186 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-4fvnn" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.567706 4834 scope.go:117] "RemoveContainer" containerID="4d6fbb84eb371a752d41d29be6f22a4ba4a58bb87417f54a0d562b59b2b8345d" Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.574199 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerStarted","Data":"9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0"} Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.579842 4834 generic.go:334] "Generic (PLEG): container finished" podID="c644157e-579c-4348-bc28-fd5a273dfb02" containerID="ef3d2bfc377dd1ff0e431c35ee333e5a95afbaa03c3d40056e31da206ecdd416" exitCode=0 Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.579888 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" event={"ID":"c644157e-579c-4348-bc28-fd5a273dfb02","Type":"ContainerDied","Data":"ef3d2bfc377dd1ff0e431c35ee333e5a95afbaa03c3d40056e31da206ecdd416"} Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.759488 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:57 crc kubenswrapper[4834]: I0121 14:52:57.774285 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-4fvnn"] Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.236637 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:52:58 crc kubenswrapper[4834]: E0121 14:52:58.238973 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" containerName="init" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.239014 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" containerName="init" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.239272 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" containerName="init" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.246339 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.252383 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.252661 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.254287 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.259609 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.352185 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd333c9-53d0-425c-9b3e-b2dff073b7e2" path="/var/lib/kubelet/pods/7bd333c9-53d0-425c-9b3e-b2dff073b7e2/volumes" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354026 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354063 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354097 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354205 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354229 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354248 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354312 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.354329 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458391 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458452 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458484 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458510 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458526 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458546 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458563 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.458607 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.459664 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.460399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.468861 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.469893 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.470319 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.471409 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.479279 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.489448 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") pod \"swift-proxy-56587d777c-2rx88\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.511456 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.513615 4834 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.516331 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.519711 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.529938 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561419 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561510 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561556 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561581 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561631 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561662 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.561684 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.611520 4834 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerStarted","Data":"52bc71d8458ec442f5c03fd42e72e4aee5dfcd1add1d1cd4d9a3fbb418f22b46"} Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.612680 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.632251 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerStarted","Data":"b0fceacb1c47e8063d1e7c22118f445dc2ab5d3565c665ac3bb3f6be80d3738d"} Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.638740 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" podStartSLOduration=3.502103877 podStartE2EDuration="7.638718926s" podCreationTimestamp="2026-01-21 14:52:51 +0000 UTC" firstStartedPulling="2026-01-21 14:52:52.593621462 +0000 UTC m=+1318.567970507" lastFinishedPulling="2026-01-21 14:52:56.730236511 +0000 UTC m=+1322.704585556" observedRunningTime="2026-01-21 14:52:58.633431162 +0000 UTC m=+1324.607780227" watchObservedRunningTime="2026-01-21 14:52:58.638718926 +0000 UTC m=+1324.613067961" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664310 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664366 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664432 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664462 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664483 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.664623 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.668683 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.671092 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.674440 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.678104 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.679303 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.698457 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.704641 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs\") pod \"barbican-api-6f6cfdb85b-jvqw8\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:58 crc kubenswrapper[4834]: I0121 14:52:58.901662 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:52:59 crc kubenswrapper[4834]: I0121 14:52:59.499002 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:52:59 crc kubenswrapper[4834]: I0121 14:52:59.511085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:52:59 crc kubenswrapper[4834]: I0121 14:52:59.674843 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerStarted","Data":"c5f1cf2cb5e4e774781c7c94bb7c3dd49bca527b68353a9f286936f373836cbe"} Jan 21 14:52:59 crc kubenswrapper[4834]: I0121 14:52:59.676573 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerStarted","Data":"96b3563f0bfbe774b9bce480c3e7fd09de71973ab324c248998a4b75431a80ea"} Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.450633 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.452012 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-central-agent" containerID="cri-o://dc169311afe3db8ca15db4d240544cf54976b924f04fb5da4fff06947d2e0efa" gracePeriod=30 Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.453643 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="sg-core" containerID="cri-o://f5c19b7aae428c88697dd775ffecf53214835dd11d9d1722c0805fa542895d55" gracePeriod=30 Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.454026 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="proxy-httpd" containerID="cri-o://95707426bce03d4a84e74565e8448046ded898ff61888ea4ee07087e4b80938e" gracePeriod=30 Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.454065 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-notification-agent" containerID="cri-o://ac92aa1ee28b63ff7a8f90fadfabf79c9231af0774d8143c0f746ce691c16513" gracePeriod=30 Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.476811 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 21 14:53:00 crc kubenswrapper[4834]: I0121 14:53:00.694151 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerStarted","Data":"81d5bf68ead6fda2d10dc0336cb58d2681b246c03a7165af1afa3d1127ec6435"} Jan 21 14:53:03 crc kubenswrapper[4834]: I0121 14:53:03.256472 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 21 14:53:03 crc kubenswrapper[4834]: I0121 14:53:03.256522 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:53:04 crc kubenswrapper[4834]: I0121 14:53:04.875846 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:53:04 crc kubenswrapper[4834]: I0121 14:53:04.888614 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256592 4834 generic.go:334] "Generic (PLEG): container finished" podID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerID="95707426bce03d4a84e74565e8448046ded898ff61888ea4ee07087e4b80938e" exitCode=0 Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256632 4834 generic.go:334] "Generic (PLEG): container finished" podID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerID="f5c19b7aae428c88697dd775ffecf53214835dd11d9d1722c0805fa542895d55" exitCode=2 Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256643 4834 generic.go:334] "Generic (PLEG): container finished" podID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerID="dc169311afe3db8ca15db4d240544cf54976b924f04fb5da4fff06947d2e0efa" exitCode=0 Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256693 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerDied","Data":"95707426bce03d4a84e74565e8448046ded898ff61888ea4ee07087e4b80938e"} Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256723 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerDied","Data":"f5c19b7aae428c88697dd775ffecf53214835dd11d9d1722c0805fa542895d55"} Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.256732 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerDied","Data":"dc169311afe3db8ca15db4d240544cf54976b924f04fb5da4fff06947d2e0efa"} Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.270208 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerStarted","Data":"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b"} Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.310621 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" event={"ID":"c644157e-579c-4348-bc28-fd5a273dfb02","Type":"ContainerStarted","Data":"2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92"} Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.310676 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.378492 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" podStartSLOduration=10.360943872 podStartE2EDuration="14.378471892s" podCreationTimestamp="2026-01-21 14:52:51 +0000 UTC" 
firstStartedPulling="2026-01-21 14:52:52.71174566 +0000 UTC m=+1318.686094705" lastFinishedPulling="2026-01-21 14:52:56.72927368 +0000 UTC m=+1322.703622725" observedRunningTime="2026-01-21 14:53:05.34558892 +0000 UTC m=+1331.319937965" watchObservedRunningTime="2026-01-21 14:53:05.378471892 +0000 UTC m=+1331.352820937" Jan 21 14:53:05 crc kubenswrapper[4834]: I0121 14:53:05.387636 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" podStartSLOduration=12.387611335 podStartE2EDuration="12.387611335s" podCreationTimestamp="2026-01-21 14:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:05.375119968 +0000 UTC m=+1331.349469033" watchObservedRunningTime="2026-01-21 14:53:05.387611335 +0000 UTC m=+1331.361960380" Jan 21 14:53:06 crc kubenswrapper[4834]: I0121 14:53:06.321565 4834 generic.go:334] "Generic (PLEG): container finished" podID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerID="ac92aa1ee28b63ff7a8f90fadfabf79c9231af0774d8143c0f746ce691c16513" exitCode=0 Jan 21 14:53:06 crc kubenswrapper[4834]: I0121 14:53:06.322038 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerDied","Data":"ac92aa1ee28b63ff7a8f90fadfabf79c9231af0774d8143c0f746ce691c16513"} Jan 21 14:53:06 crc kubenswrapper[4834]: I0121 14:53:06.336653 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerStarted","Data":"d0842ea9c1eb4c9eb1b4ebac20aeac250fca82a332954d0e335a267a9f99c8f0"} Jan 21 14:53:06 crc kubenswrapper[4834]: I0121 14:53:06.336698 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerStarted","Data":"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49"} Jan 21 14:53:09 crc kubenswrapper[4834]: I0121 14:53:09.076256 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:53:09 crc kubenswrapper[4834]: I0121 14:53:09.157679 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:53:09 crc kubenswrapper[4834]: I0121 14:53:09.160351 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-qth6j" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" containerID="cri-o://3a0ffcd8d5017104752144e14cf4b864e0f28f97ab625a5895865b99a01ff19d" gracePeriod=10 Jan 21 14:53:10 crc kubenswrapper[4834]: I0121 14:53:10.622090 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-685444497c-qth6j" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 21 14:53:11 crc kubenswrapper[4834]: I0121 14:53:11.412490 4834 generic.go:334] "Generic (PLEG): container finished" podID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerID="3a0ffcd8d5017104752144e14cf4b864e0f28f97ab625a5895865b99a01ff19d" exitCode=0 Jan 21 14:53:11 crc kubenswrapper[4834]: I0121 14:53:11.412549 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-qth6j" 
event={"ID":"757e81d8-6937-4a5a-8ebb-8af24d465dc7","Type":"ContainerDied","Data":"3a0ffcd8d5017104752144e14cf4b864e0f28f97ab625a5895865b99a01ff19d"} Jan 21 14:53:15 crc kubenswrapper[4834]: I0121 14:53:15.622445 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-685444497c-qth6j" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 21 14:53:15 crc kubenswrapper[4834]: I0121 14:53:15.746391 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.114295 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.114718 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.114774 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.115694 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.115759 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986" gracePeriod=600 Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.486493 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986" exitCode=0 Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.486573 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986"} Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.487071 4834 scope.go:117] "RemoveContainer" containerID="c24a029b19e48e3cf137efc10e8a62368e95d300f53e096d99dddcd0c8a5d0a8" Jan 21 14:53:17 crc kubenswrapper[4834]: E0121 14:53:17.487087 4834 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944" Jan 21 14:53:17 crc 
kubenswrapper[4834]: E0121 14:53:17.487345 4834 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n687h5fh5f9h54dh58ch64dh6ch68ch59bh5f5h586hbh5dfh6fh89h575hb8h665hcbh549h654h55dh97hf7h6ch5bh568h697h55h5fbh5fdh58bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfwzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 14:53:17 crc kubenswrapper[4834]: E0121 14:53:17.488843 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.771994 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.897621 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.919819 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920231 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920269 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920339 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920429 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920532 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29jvv\" (UniqueName: \"kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.920564 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd\") pod \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\" (UID: \"9b7d697c-0f23-4ea5-b8eb-2735d019c579\") " Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.921495 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.932456 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.941427 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts" (OuterVolumeSpecName: "scripts") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.942782 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv" (OuterVolumeSpecName: "kube-api-access-29jvv") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "kube-api-access-29jvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:17 crc kubenswrapper[4834]: I0121 14:53:17.970241 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024652 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024715 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9n99\" (UniqueName: \"kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024789 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024905 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.024962 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0\") pod \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\" (UID: \"757e81d8-6937-4a5a-8ebb-8af24d465dc7\") " Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 
14:53:18.033393 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.033433 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.033442 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b7d697c-0f23-4ea5-b8eb-2735d019c579-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.033451 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.033468 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29jvv\" (UniqueName: \"kubernetes.io/projected/9b7d697c-0f23-4ea5-b8eb-2735d019c579-kube-api-access-29jvv\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.069112 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.069864 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99" (OuterVolumeSpecName: "kube-api-access-x9n99") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "kube-api-access-x9n99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.115764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.121788 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data" (OuterVolumeSpecName: "config-data") pod "9b7d697c-0f23-4ea5-b8eb-2735d019c579" (UID: "9b7d697c-0f23-4ea5-b8eb-2735d019c579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.132621 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.136114 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.139247 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.139410 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7d697c-0f23-4ea5-b8eb-2735d019c579-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.139511 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.139597 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9n99\" (UniqueName: \"kubernetes.io/projected/757e81d8-6937-4a5a-8ebb-8af24d465dc7-kube-api-access-x9n99\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.144574 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config" (OuterVolumeSpecName: "config") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.156308 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.173498 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "757e81d8-6937-4a5a-8ebb-8af24d465dc7" (UID: "757e81d8-6937-4a5a-8ebb-8af24d465dc7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.241120 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.241175 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.241186 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/757e81d8-6937-4a5a-8ebb-8af24d465dc7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.499098 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.502759 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerStarted","Data":"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.503035 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.503059 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.510676 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.510822 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b7d697c-0f23-4ea5-b8eb-2735d019c579","Type":"ContainerDied","Data":"ff2b9e238339f3be70587acb0ee8ed13b55bee934608b7146b3ac8d3983abe6b"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.510905 4834 scope.go:117] "RemoveContainer" containerID="95707426bce03d4a84e74565e8448046ded898ff61888ea4ee07087e4b80938e" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.513589 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-56587d777c-2rx88" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.529955 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-qth6j" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.530256 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-qth6j" event={"ID":"757e81d8-6937-4a5a-8ebb-8af24d465dc7","Type":"ContainerDied","Data":"5309775e84c8ecc11fa7dd8286e72b861a0d691459ec6e4872b37bd7c5bec3dc"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.534896 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerStarted","Data":"8ae7f39ebb534c5f7c768a72013ee67979250d0b179b0edb24833b49e7b984c7"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.545463 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerStarted","Data":"19ad5e1d36dc4cca92dc2d55710cdc46c1e446991cfdd3c7f69a92b2d721c0f8"} Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.546186 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.546217 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.550618 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\"\"" pod="openstack/openstackclient" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.588908 4834 scope.go:117] "RemoveContainer" containerID="f5c19b7aae428c88697dd775ffecf53214835dd11d9d1722c0805fa542895d55" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.589029 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-56587d777c-2rx88" podStartSLOduration=20.589014071 podStartE2EDuration="20.589014071s" podCreationTimestamp="2026-01-21 14:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:18.564701786 +0000 UTC m=+1344.539050831" watchObservedRunningTime="2026-01-21 14:53:18.589014071 +0000 UTC m=+1344.563363116" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.607084 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.619970 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-qth6j"] Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.626913 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" podStartSLOduration=20.626891006 podStartE2EDuration="20.626891006s" podCreationTimestamp="2026-01-21 14:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:18.605365188 +0000 UTC m=+1344.579714243" watchObservedRunningTime="2026-01-21 14:53:18.626891006 +0000 UTC m=+1344.601240051" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.628756 4834 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/swift-proxy-56587d777c-2rx88" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.634164 4834 scope.go:117] "RemoveContainer" containerID="ac92aa1ee28b63ff7a8f90fadfabf79c9231af0774d8143c0f746ce691c16513" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.644370 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.654111 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.666569 4834 scope.go:117] "RemoveContainer" containerID="dc169311afe3db8ca15db4d240544cf54976b924f04fb5da4fff06947d2e0efa" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.701139 4834 scope.go:117] "RemoveContainer" containerID="3a0ffcd8d5017104752144e14cf4b864e0f28f97ab625a5895865b99a01ff19d" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.721418 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722053 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722070 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722090 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="sg-core" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722098 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="sg-core" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722112 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="init" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722118 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="init" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722134 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-central-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722140 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-central-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722154 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-notification-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722159 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-notification-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: E0121 14:53:18.722180 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="proxy-httpd" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722186 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" 
containerName="proxy-httpd" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722390 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" containerName="dnsmasq-dns" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722419 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="proxy-httpd" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722433 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="sg-core" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722448 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-central-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.722462 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="ceilometer-notification-agent" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.725666 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.730687 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.731466 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.734548 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.748241 4834 scope.go:117] "RemoveContainer" containerID="201d6ae3b6cfa218d257559e9de5bc47af9c54e22c295e20c2930d30dca03f6d" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.855557 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.855915 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8vk\" (UniqueName: \"kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.856120 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.856306 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.856471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.856624 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.856750 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958263 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958371 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958418 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958444 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958516 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958546 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8vk\" (UniqueName: \"kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.958586 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.959072 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.960294 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.966788 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.966917 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.967804 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.970117 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:18 crc kubenswrapper[4834]: I0121 14:53:18.979112 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8vk\" (UniqueName: \"kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk\") pod \"ceilometer-0\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " pod="openstack/ceilometer-0" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.100909 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.376503 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.482380 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.482635 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f9dd849bb-b4xzs" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-api" containerID="cri-o://65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe" gracePeriod=30 Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.483086 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f9dd849bb-b4xzs" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" containerID="cri-o://e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777" gracePeriod=30 Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.604189 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerStarted","Data":"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c"} Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.604249 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api-log" containerID="cri-o://81d5bf68ead6fda2d10dc0336cb58d2681b246c03a7165af1afa3d1127ec6435" gracePeriod=30 Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.605527 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.605579 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api" containerID="cri-o://8ae7f39ebb534c5f7c768a72013ee67979250d0b179b0edb24833b49e7b984c7" gracePeriod=30 Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.673491 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.705742 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=26.705714908 podStartE2EDuration="26.705714908s" podCreationTimestamp="2026-01-21 14:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:19.635796337 +0000 UTC m=+1345.610145382" watchObservedRunningTime="2026-01-21 14:53:19.705714908 +0000 UTC m=+1345.680063953" Jan 21 14:53:19 crc kubenswrapper[4834]: I0121 14:53:19.740868 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=24.78104364 podStartE2EDuration="26.740838909s" podCreationTimestamp="2026-01-21 14:52:53 +0000 UTC" firstStartedPulling="2026-01-21 14:52:54.775492188 +0000 UTC m=+1320.749841233" lastFinishedPulling="2026-01-21 14:52:56.735287467 +0000 UTC m=+1322.709636502" observedRunningTime="2026-01-21 14:53:19.665725096 +0000 UTC m=+1345.640074151" 
watchObservedRunningTime="2026-01-21 14:53:19.740838909 +0000 UTC m=+1345.715187954" Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.126234 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.338682 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757e81d8-6937-4a5a-8ebb-8af24d465dc7" path="/var/lib/kubelet/pods/757e81d8-6937-4a5a-8ebb-8af24d465dc7/volumes" Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.339435 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" path="/var/lib/kubelet/pods/9b7d697c-0f23-4ea5-b8eb-2735d019c579/volumes" Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.623895 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerStarted","Data":"b76ed96a04d0b9b436084ce70912fc8f6ce204e246d93f8e6bf5b1741dc001dd"} Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.637333 4834 generic.go:334] "Generic (PLEG): container finished" podID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerID="81d5bf68ead6fda2d10dc0336cb58d2681b246c03a7165af1afa3d1127ec6435" exitCode=143 Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.637989 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerDied","Data":"81d5bf68ead6fda2d10dc0336cb58d2681b246c03a7165af1afa3d1127ec6435"} Jan 21 14:53:20 crc kubenswrapper[4834]: I0121 14:53:20.800646 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.657859 4834 generic.go:334] "Generic (PLEG): container finished" podID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerID="8ae7f39ebb534c5f7c768a72013ee67979250d0b179b0edb24833b49e7b984c7" exitCode=0 Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.657975 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerDied","Data":"8ae7f39ebb534c5f7c768a72013ee67979250d0b179b0edb24833b49e7b984c7"} Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.667127 4834 generic.go:334] "Generic (PLEG): container finished" podID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerID="e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777" exitCode=0 Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.667179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerDied","Data":"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777"} Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.670688 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerStarted","Data":"812c3bd340e74d50c9d33e4f333c1e389145b76516d8d553ad34bad8cd6449a8"} Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.789193 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977547 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977704 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977707 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977818 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977855 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.977951 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.978042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.979233 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj4s2\" (UniqueName: \"kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2\") pod \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\" (UID: \"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f\") " Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.979296 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs" (OuterVolumeSpecName: "logs") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.979879 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.979905 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.987892 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2" (OuterVolumeSpecName: "kube-api-access-dj4s2") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "kube-api-access-dj4s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.988007 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts" (OuterVolumeSpecName: "scripts") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:21 crc kubenswrapper[4834]: I0121 14:53:21.999864 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.023725 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.058745 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data" (OuterVolumeSpecName: "config-data") pod "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" (UID: "65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.080792 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj4s2\" (UniqueName: \"kubernetes.io/projected/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-kube-api-access-dj4s2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.080836 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.080847 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.080855 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.080863 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.687622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerStarted","Data":"b47ed0f6c8104a3b7953464e5976417f153bf0a91bf10723168f9b3f52ee2cdb"} Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.695053 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f","Type":"ContainerDied","Data":"8b7abf46897711fcb2a7bb9fc1c10de568ad21e4e75275318f58fbc3786bae0c"} Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.695137 4834 scope.go:117] "RemoveContainer" containerID="8ae7f39ebb534c5f7c768a72013ee67979250d0b179b0edb24833b49e7b984c7" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.695152 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.720830 4834 scope.go:117] "RemoveContainer" containerID="81d5bf68ead6fda2d10dc0336cb58d2681b246c03a7165af1afa3d1127ec6435" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.727790 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.735713 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.754735 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:22 crc kubenswrapper[4834]: E0121 14:53:22.755680 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api-log" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.755713 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api-log" Jan 21 14:53:22 crc kubenswrapper[4834]: E0121 14:53:22.755733 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.755742 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.755987 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api-log" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.756351 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" containerName="cinder-api" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.757597 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.769467 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.769780 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.770170 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.790282 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806196 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7nm\" (UniqueName: \"kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806358 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806431 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806468 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806551 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806585 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806623 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.806648 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911397 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7nm\" (UniqueName: \"kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911503 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911611 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911650 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911740 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911767 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911806 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.911830 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.912424 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.913076 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.921152 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.921221 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.934113 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.934472 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.935012 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7nm\" (UniqueName: \"kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.940443 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:22 crc kubenswrapper[4834]: I0121 14:53:22.941117 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " pod="openstack/cinder-api-0" Jan 21 14:53:23 crc kubenswrapper[4834]: I0121 14:53:23.088214 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:53:23 crc kubenswrapper[4834]: I0121 14:53:23.632646 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:53:23 crc kubenswrapper[4834]: I0121 14:53:23.703166 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:53:23 crc kubenswrapper[4834]: I0121 14:53:23.754943 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerStarted","Data":"4a6ed2df30e010201b9a36b01247708744061f14a48a1665bb13fdeb43d9d9bb"} Jan 21 14:53:23 crc kubenswrapper[4834]: I0121 14:53:23.979165 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.338252 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f" path="/var/lib/kubelet/pods/65b5b76c-d9a4-4041-a3e3-e0f8c4088f5f/volumes" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.459771 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.487978 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.562787 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config\") pod \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.562876 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle\") pod \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.562957 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config\") pod \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.562989 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9z6d\" (UniqueName: \"kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d\") pod \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.563038 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs\") pod \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\" (UID: \"b8066f26-fd07-4d6c-bd1b-44664f2a091b\") " Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.589899 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d" (OuterVolumeSpecName: "kube-api-access-d9z6d") pod "b8066f26-fd07-4d6c-bd1b-44664f2a091b" (UID: 
"b8066f26-fd07-4d6c-bd1b-44664f2a091b"). InnerVolumeSpecName "kube-api-access-d9z6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.590044 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b8066f26-fd07-4d6c-bd1b-44664f2a091b" (UID: "b8066f26-fd07-4d6c-bd1b-44664f2a091b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.637410 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8066f26-fd07-4d6c-bd1b-44664f2a091b" (UID: "b8066f26-fd07-4d6c-bd1b-44664f2a091b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.651766 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config" (OuterVolumeSpecName: "config") pod "b8066f26-fd07-4d6c-bd1b-44664f2a091b" (UID: "b8066f26-fd07-4d6c-bd1b-44664f2a091b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.668798 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.668847 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9z6d\" (UniqueName: \"kubernetes.io/projected/b8066f26-fd07-4d6c-bd1b-44664f2a091b-kube-api-access-d9z6d\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.668861 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.668873 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.735118 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b8066f26-fd07-4d6c-bd1b-44664f2a091b" (UID: "b8066f26-fd07-4d6c-bd1b-44664f2a091b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.767104 4834 generic.go:334] "Generic (PLEG): container finished" podID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerID="65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe" exitCode=0 Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.767179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerDied","Data":"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe"} Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.767194 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f9dd849bb-b4xzs" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.767214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f9dd849bb-b4xzs" event={"ID":"b8066f26-fd07-4d6c-bd1b-44664f2a091b","Type":"ContainerDied","Data":"aacf58f997d39bde0ffc42f7e25cdb3cf5e547683c378882c3c533749789419b"} Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.767233 4834 scope.go:117] "RemoveContainer" containerID="e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.770254 4834 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8066f26-fd07-4d6c-bd1b-44664f2a091b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.771198 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerStarted","Data":"1723cbb8157acedc880b5d7714938ee819e163f28121f576b90009526b63bcea"} Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.807341 4834 scope.go:117] "RemoveContainer" containerID="65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.829276 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.834000 4834 scope.go:117] "RemoveContainer" containerID="e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777" Jan 21 14:53:24 crc kubenswrapper[4834]: E0121 14:53:24.838036 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777\": container with ID starting with e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777 not found: ID does not exist" containerID="e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.838077 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777"} err="failed to get container status \"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777\": rpc error: code = NotFound desc = could not find container \"e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777\": container with ID starting with e76d1ad03ccb789cdc1020265fcef3b928e28585c89e995ab3caab3dfd344777 not found: ID does not exist" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.838101 4834 scope.go:117] "RemoveContainer" 
containerID="65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe" Jan 21 14:53:24 crc kubenswrapper[4834]: E0121 14:53:24.838465 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe\": container with ID starting with 65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe not found: ID does not exist" containerID="65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.838488 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe"} err="failed to get container status \"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe\": rpc error: code = NotFound desc = could not find container \"65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe\": container with ID starting with 65e3d5073a7d33db2e837d9f1339281d183feaa93d4eb902e7c98067989140fe not found: ID does not exist" Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.844133 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:53:24 crc kubenswrapper[4834]: I0121 14:53:24.856605 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f9dd849bb-b4xzs"] Jan 21 14:53:25 crc kubenswrapper[4834]: I0121 14:53:25.784468 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerStarted","Data":"7939f32b333f3aaa2768f5185ab1a75ab0e7c265ea2a3d0ff3990714123a7f14"} Jan 21 14:53:25 crc kubenswrapper[4834]: I0121 14:53:25.784675 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="cinder-scheduler" containerID="cri-o://ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b" gracePeriod=30 Jan 21 14:53:25 crc kubenswrapper[4834]: I0121 14:53:25.785048 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="probe" containerID="cri-o://c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c" gracePeriod=30 Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.338050 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" path="/var/lib/kubelet/pods/b8066f26-fd07-4d6c-bd1b-44664f2a091b/volumes" Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.436528 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.531656 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.531951 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" containerID="cri-o://e5456c5eea9d230570ec458b5098a532062a787b2e38ed573fb4c4efa88da65e" gracePeriod=30 Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.532558 4834 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" containerID="cri-o://e95c53cf888d112fdbdbde8f0fcfaaa4a86cf83cf3a1f38b830150e00835be97" gracePeriod=30 Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.800075 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerStarted","Data":"831f7e80b2dd51f5fba704c0975d18b653c249edfc68646cb8a57e93f78ca51e"} Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.802140 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.825894 4834 generic.go:334] "Generic (PLEG): container finished" podID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerID="e5456c5eea9d230570ec458b5098a532062a787b2e38ed573fb4c4efa88da65e" exitCode=143 Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.825967 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerDied","Data":"e5456c5eea9d230570ec458b5098a532062a787b2e38ed573fb4c4efa88da65e"} Jan 21 14:53:26 crc kubenswrapper[4834]: I0121 14:53:26.849120 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.849075867 podStartE2EDuration="4.849075867s" podCreationTimestamp="2026-01-21 14:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:26.835458735 +0000 UTC m=+1352.809807800" watchObservedRunningTime="2026-01-21 14:53:26.849075867 +0000 UTC m=+1352.823424912" Jan 21 14:53:27 crc kubenswrapper[4834]: I0121 14:53:27.838216 4834 generic.go:334] "Generic (PLEG): container finished" podID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerID="c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c" exitCode=0 Jan 21 14:53:27 crc kubenswrapper[4834]: I0121 14:53:27.838323 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerDied","Data":"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c"} Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.157470 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.158569 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-httpd" containerID="cri-o://88a31cfa48c85b0165aa88edab452893c52414c0c101f61a9dfa58c255844c9d" gracePeriod=30 Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.158738 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-log" containerID="cri-o://d103dd9297c1d2baee70d0c4f7d9b78e333f4f0afc7a6a14c41398f6488db835" gracePeriod=30 Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.868236 4834 generic.go:334] "Generic (PLEG): container finished" podID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerID="d103dd9297c1d2baee70d0c4f7d9b78e333f4f0afc7a6a14c41398f6488db835" exitCode=143 Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 
14:53:28.868902 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerDied","Data":"d103dd9297c1d2baee70d0c4f7d9b78e333f4f0afc7a6a14c41398f6488db835"} Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.883785 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerStarted","Data":"acf275885a87766e22e50a5e1ce91edb61ca986bfae256e0a29a5ab7684a2f57"} Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.924046 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.924788 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-log" containerID="cri-o://1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8" gracePeriod=30 Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.924925 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-httpd" containerID="cri-o://a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39" gracePeriod=30 Jan 21 14:53:28 crc kubenswrapper[4834]: I0121 14:53:28.934267 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.900273255 podStartE2EDuration="10.934240861s" podCreationTimestamp="2026-01-21 14:53:18 +0000 UTC" firstStartedPulling="2026-01-21 14:53:20.132719719 +0000 UTC m=+1346.107068774" lastFinishedPulling="2026-01-21 14:53:28.166687335 +0000 UTC m=+1354.141036380" observedRunningTime="2026-01-21 14:53:28.91586779 +0000 UTC m=+1354.890216835" watchObservedRunningTime="2026-01-21 14:53:28.934240861 +0000 UTC m=+1354.908589906" Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.242518 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.724045 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41796->10.217.0.161:9311: read: connection reset by peer" Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.724174 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-87c79555d-cxwgx" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41780->10.217.0.161:9311: read: connection reset by peer" Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.896542 4834 generic.go:334] "Generic (PLEG): container finished" podID="67626626-3343-484b-9c1c-6d7bee71821f" containerID="1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8" exitCode=143 Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.896627 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerDied","Data":"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8"} Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.900979 4834 generic.go:334] "Generic (PLEG): container finished" podID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerID="e95c53cf888d112fdbdbde8f0fcfaaa4a86cf83cf3a1f38b830150e00835be97" exitCode=0 Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.901111 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerDied","Data":"e95c53cf888d112fdbdbde8f0fcfaaa4a86cf83cf3a1f38b830150e00835be97"} Jan 21 14:53:29 crc kubenswrapper[4834]: I0121 14:53:29.901511 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.362104 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.428368 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle\") pod \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.428426 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom\") pod \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.428518 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmzgr\" (UniqueName: \"kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr\") pod \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.428548 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs\") pod \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.428572 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data\") pod \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\" (UID: \"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.432156 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs" (OuterVolumeSpecName: "logs") pod "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" (UID: "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.441853 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr" (OuterVolumeSpecName: "kube-api-access-zmzgr") pod "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" (UID: "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e"). InnerVolumeSpecName "kube-api-access-zmzgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.444535 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" (UID: "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.478211 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" (UID: "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.487619 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.526818 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data" (OuterVolumeSpecName: "config-data") pod "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" (UID: "bd3ee5c8-d8c9-42db-913b-98e4c9292b5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.531497 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.531534 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.531544 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmzgr\" (UniqueName: \"kubernetes.io/projected/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-kube-api-access-zmzgr\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.531555 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.531566 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.632570 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.632785 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.632827 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.633017 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.633073 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.633147 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrqh\" (UniqueName: \"kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh\") pod \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\" (UID: \"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0\") " Jan 21 14:53:30 crc kubenswrapper[4834]: 
I0121 14:53:30.633285 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.635732 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.638158 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts" (OuterVolumeSpecName: "scripts") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.641013 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.645066 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh" (OuterVolumeSpecName: "kube-api-access-slrqh") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "kube-api-access-slrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.703015 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.737546 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.737578 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.737590 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrqh\" (UniqueName: \"kubernetes.io/projected/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-kube-api-access-slrqh\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.737599 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.754673 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data" (OuterVolumeSpecName: "config-data") pod "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" (UID: "3546b8ad-4ee4-46b4-b0bb-fe048935a6e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.839780 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.912778 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-87c79555d-cxwgx" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.912826 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-87c79555d-cxwgx" event={"ID":"bd3ee5c8-d8c9-42db-913b-98e4c9292b5e","Type":"ContainerDied","Data":"8bdbb65d28c073a9d28c95f96fd3a5259228e31c75cc66f965eef6d06284c3a4"} Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.913315 4834 scope.go:117] "RemoveContainer" containerID="e95c53cf888d112fdbdbde8f0fcfaaa4a86cf83cf3a1f38b830150e00835be97" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914430 4834 generic.go:334] "Generic (PLEG): container finished" podID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerID="ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b" exitCode=0 Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914481 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914525 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerDied","Data":"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b"} Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914573 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3546b8ad-4ee4-46b4-b0bb-fe048935a6e0","Type":"ContainerDied","Data":"8bfc065ed72c9308c2845e76f3accbf3de0333321df08fc66a1efcca721d93bb"} Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914875 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-central-agent" containerID="cri-o://812c3bd340e74d50c9d33e4f333c1e389145b76516d8d553ad34bad8cd6449a8" gracePeriod=30 Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914936 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="sg-core" containerID="cri-o://4a6ed2df30e010201b9a36b01247708744061f14a48a1665bb13fdeb43d9d9bb" gracePeriod=30 Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914952 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="proxy-httpd" containerID="cri-o://acf275885a87766e22e50a5e1ce91edb61ca986bfae256e0a29a5ab7684a2f57" gracePeriod=30 Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.914950 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-notification-agent" containerID="cri-o://b47ed0f6c8104a3b7953464e5976417f153bf0a91bf10723168f9b3f52ee2cdb" gracePeriod=30 Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.946471 4834 scope.go:117] "RemoveContainer" containerID="e5456c5eea9d230570ec458b5098a532062a787b2e38ed573fb4c4efa88da65e" Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.961269 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:30 crc kubenswrapper[4834]: I0121 14:53:30.969710 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.012368 4834 scope.go:117] "RemoveContainer" containerID="c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.012648 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.025332 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-87c79555d-cxwgx"] Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.035612 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036278 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="probe" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036300 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="probe" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036313 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="cinder-scheduler" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036321 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="cinder-scheduler" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036342 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036351 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036415 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036424 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036434 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-api" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036442 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-api" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.036456 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036463 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036711 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-api" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036725 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036741 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="cinder-scheduler" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036752 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" containerName="barbican-api-log" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036763 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8066f26-fd07-4d6c-bd1b-44664f2a091b" containerName="neutron-httpd" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.036774 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" containerName="probe" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.038371 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.041742 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.047266 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.060016 4834 scope.go:117] "RemoveContainer" containerID="ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.110230 4834 scope.go:117] "RemoveContainer" containerID="c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.111407 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c\": container with ID starting with c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c not found: ID does not exist" containerID="c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.111478 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c"} err="failed to get container status \"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c\": rpc error: code = NotFound desc = could not find container \"c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c\": container with ID starting with c07220494d1d62c7323250ee7b50f80e50eec37110b84131f40dc80ad3be750c not found: ID does not exist" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.111519 4834 scope.go:117] "RemoveContainer" containerID="ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b" Jan 21 14:53:31 crc kubenswrapper[4834]: E0121 14:53:31.117388 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b\": container with ID starting with ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b not found: ID does not exist" containerID="ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.117443 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b"} err="failed to get container status \"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b\": rpc error: code = NotFound desc = could not find container \"ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b\": container with ID starting with ea6272c87a635e8c983feb83c5061d3d0930bd51c95761c2b9206168508bc24b not found: ID does not exist" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.146337 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.146773 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.146802 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccxd\" (UniqueName: \"kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.146854 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.147010 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.147038 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.248848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.248959 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mccxd\" (UniqueName: \"kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.248990 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.249102 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.249124 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.249167 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.249241 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.254779 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.264523 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.265031 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.268793 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.270570 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mccxd\" (UniqueName: \"kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd\") pod \"cinder-scheduler-0\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.400122 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.931159 4834 generic.go:334] "Generic (PLEG): container finished" podID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerID="88a31cfa48c85b0165aa88edab452893c52414c0c101f61a9dfa58c255844c9d" exitCode=0 Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.931662 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerDied","Data":"88a31cfa48c85b0165aa88edab452893c52414c0c101f61a9dfa58c255844c9d"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.931702 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dc2d5700-1644-4504-aae6-8bcf6c87363f","Type":"ContainerDied","Data":"8044822a47f6b5731478b83713a9a93f4edbf333fa544709065dc92fa0016322"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.931717 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8044822a47f6b5731478b83713a9a93f4edbf333fa544709065dc92fa0016322" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935788 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerID="acf275885a87766e22e50a5e1ce91edb61ca986bfae256e0a29a5ab7684a2f57" exitCode=0 Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935837 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerID="4a6ed2df30e010201b9a36b01247708744061f14a48a1665bb13fdeb43d9d9bb" exitCode=2 Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935846 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerID="b47ed0f6c8104a3b7953464e5976417f153bf0a91bf10723168f9b3f52ee2cdb" exitCode=0 Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935855 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerID="812c3bd340e74d50c9d33e4f333c1e389145b76516d8d553ad34bad8cd6449a8" exitCode=0 Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935936 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerDied","Data":"acf275885a87766e22e50a5e1ce91edb61ca986bfae256e0a29a5ab7684a2f57"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.935971 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerDied","Data":"4a6ed2df30e010201b9a36b01247708744061f14a48a1665bb13fdeb43d9d9bb"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.936018 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerDied","Data":"b47ed0f6c8104a3b7953464e5976417f153bf0a91bf10723168f9b3f52ee2cdb"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.936031 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerDied","Data":"812c3bd340e74d50c9d33e4f333c1e389145b76516d8d553ad34bad8cd6449a8"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.936045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aa61cdda-76e7-42fe-b399-6f7cfda22356","Type":"ContainerDied","Data":"b76ed96a04d0b9b436084ce70912fc8f6ce204e246d93f8e6bf5b1741dc001dd"} Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.936055 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76ed96a04d0b9b436084ce70912fc8f6ce204e246d93f8e6bf5b1741dc001dd" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.983182 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:31 crc kubenswrapper[4834]: I0121 14:53:31.996232 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077603 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l478h\" (UniqueName: \"kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077667 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077706 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077800 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8vk\" (UniqueName: \"kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077856 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.077901 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080163 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080231 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: 
\"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080284 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080328 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080404 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080429 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080467 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080512 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs\") pod \"dc2d5700-1644-4504-aae6-8bcf6c87363f\" (UID: \"dc2d5700-1644-4504-aae6-8bcf6c87363f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.080550 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts\") pod \"aa61cdda-76e7-42fe-b399-6f7cfda22356\" (UID: \"aa61cdda-76e7-42fe-b399-6f7cfda22356\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.084506 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.086316 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.092195 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.093216 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs" (OuterVolumeSpecName: "logs") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.095514 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts" (OuterVolumeSpecName: "scripts") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.101611 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h" (OuterVolumeSpecName: "kube-api-access-l478h") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "kube-api-access-l478h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.123635 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.126446 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.126483 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.126496 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.126519 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa61cdda-76e7-42fe-b399-6f7cfda22356-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.126534 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc2d5700-1644-4504-aae6-8bcf6c87363f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.134057 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.148738 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts" (OuterVolumeSpecName: "scripts") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.159832 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.172394 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk" (OuterVolumeSpecName: "kube-api-access-ls8vk") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "kube-api-access-ls8vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.193274 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.228919 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.229098 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.229123 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.229133 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.229145 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l478h\" (UniqueName: \"kubernetes.io/projected/dc2d5700-1644-4504-aae6-8bcf6c87363f-kube-api-access-l478h\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.229157 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8vk\" (UniqueName: \"kubernetes.io/projected/aa61cdda-76e7-42fe-b399-6f7cfda22356-kube-api-access-ls8vk\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.250600 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.256231 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.265089 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.275220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data" (OuterVolumeSpecName: "config-data") pod "dc2d5700-1644-4504-aae6-8bcf6c87363f" (UID: "dc2d5700-1644-4504-aae6-8bcf6c87363f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.313969 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data" (OuterVolumeSpecName: "config-data") pod "aa61cdda-76e7-42fe-b399-6f7cfda22356" (UID: "aa61cdda-76e7-42fe-b399-6f7cfda22356"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.331126 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.331154 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.331166 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.331176 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc2d5700-1644-4504-aae6-8bcf6c87363f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.331186 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa61cdda-76e7-42fe-b399-6f7cfda22356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.340208 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3546b8ad-4ee4-46b4-b0bb-fe048935a6e0" path="/var/lib/kubelet/pods/3546b8ad-4ee4-46b4-b0bb-fe048935a6e0/volumes" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.341911 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3ee5c8-d8c9-42db-913b-98e4c9292b5e" path="/var/lib/kubelet/pods/bd3ee5c8-d8c9-42db-913b-98e4c9292b5e/volumes" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.695985 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.840686 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841357 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd7g2\" (UniqueName: \"kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841543 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841644 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841669 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841730 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.841776 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs\") pod \"67626626-3343-484b-9c1c-6d7bee71821f\" (UID: \"67626626-3343-484b-9c1c-6d7bee71821f\") " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.845165 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs" (OuterVolumeSpecName: "logs") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.847877 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.875350 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2" (OuterVolumeSpecName: "kube-api-access-pd7g2") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "kube-api-access-pd7g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.875836 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts" (OuterVolumeSpecName: "scripts") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.878675 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.929183 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.940213 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945592 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945614 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945654 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945663 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945674 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945682 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67626626-3343-484b-9c1c-6d7bee71821f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.966848 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data" (OuterVolumeSpecName: "config-data") pod "67626626-3343-484b-9c1c-6d7bee71821f" (UID: "67626626-3343-484b-9c1c-6d7bee71821f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.971817 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-trz62"] Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972709 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972728 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972742 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-central-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972749 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-central-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972767 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-notification-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972774 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-notification-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972786 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972793 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972813 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="proxy-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972821 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="proxy-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972835 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="sg-core" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972842 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="sg-core" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972858 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972864 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: E0121 14:53:32.972877 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.972884 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973071 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67626626-3343-484b-9c1c-6d7bee71821f" 
containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973086 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-log" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973095 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67626626-3343-484b-9c1c-6d7bee71821f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973106 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" containerName="glance-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973120 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="sg-core" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973138 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="proxy-httpd" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973146 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-central-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973153 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" containerName="ceilometer-notification-agent" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.973838 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:32 crc kubenswrapper[4834]: I0121 14:53:32.945689 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd7g2\" (UniqueName: \"kubernetes.io/projected/67626626-3343-484b-9c1c-6d7bee71821f-kube-api-access-pd7g2\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018000 4834 generic.go:334] "Generic (PLEG): container finished" podID="67626626-3343-484b-9c1c-6d7bee71821f" containerID="a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39" exitCode=0 Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018024 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-trz62"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018091 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerDied","Data":"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39"} Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"67626626-3343-484b-9c1c-6d7bee71821f","Type":"ContainerDied","Data":"c16803ecddc8342918bf18bb9c84202b6504712897fa1d3d451fd329052f1e62"} Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018143 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.018183 4834 scope.go:117] "RemoveContainer" containerID="a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.038713 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.039862 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.039921 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerStarted","Data":"d223903295050d2c693147a86f9aa565224340b5b63df996f2e8a775a145cba1"} Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.060947 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.100950 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.101078 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgvjg\" (UniqueName: \"kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.101149 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67626626-3343-484b-9c1c-6d7bee71821f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.101162 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.113767 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qxnvw"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.115737 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.137308 4834 scope.go:117] "RemoveContainer" containerID="1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.138432 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qxnvw"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.153273 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.166710 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.196119 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.209967 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.210304 4834 scope.go:117] "RemoveContainer" containerID="a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.211376 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgvjg\" (UniqueName: \"kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.211615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bx6w\" (UniqueName: \"kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.211817 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.211903 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.212545 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: E0121 14:53:33.214323 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39\": container with ID starting with a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39 not found: ID does not exist" containerID="a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.214363 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39"} err="failed to get container status \"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39\": rpc error: code = NotFound desc = could not find container \"a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39\": container with ID starting with a08b552e999872264c177f0f8b500961bef42bff13c6cf9e21da7b3d94f20b39 not found: ID does not exist" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.214391 4834 scope.go:117] "RemoveContainer" containerID="1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.220497 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.226697 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 14:53:33 crc kubenswrapper[4834]: E0121 14:53:33.226809 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8\": container with ID starting with 1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8 not found: ID does not exist" containerID="1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.226900 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z26b8" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.226841 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8"} err="failed to get container status \"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8\": rpc error: code = NotFound desc = could not find container \"1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8\": container with ID starting with 1405efe0806f5b8240cf1502a600a52cf9482a478aac0a4db41be5158a9f76c8 not found: ID does not exist" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.227040 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.227158 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.227305 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.244601 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.247497 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.259534 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.260170 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.263705 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.265111 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgvjg\" (UniqueName: \"kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg\") pod \"nova-api-db-create-trz62\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.274379 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.289241 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.300025 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.312996 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1045-account-create-update-dd95j"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.314274 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bx6w\" (UniqueName: \"kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.314373 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.314568 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.315059 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.318039 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.355967 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.369562 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.371041 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.381874 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.382071 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1045-account-create-update-dd95j"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.387451 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.437665 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bx6w\" (UniqueName: \"kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w\") pod \"nova-cell0-db-create-qxnvw\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.498890 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwz7w\" (UniqueName: \"kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499141 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499175 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499207 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499237 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499270 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499333 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvw9\" (UniqueName: \"kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499379 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499402 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499447 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsnx\" (UniqueName: \"kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499473 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499509 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499539 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499576 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499622 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499646 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.499691 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.545785 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.657466 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.657560 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.657627 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658163 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658197 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658603 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658630 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwz7w\" (UniqueName: \"kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658716 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658762 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz8rp\" (UniqueName: \"kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658795 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658849 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658870 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.658956 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659005 
4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659095 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659119 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659212 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvw9\" (UniqueName: \"kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659281 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659350 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659378 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659464 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsnx\" (UniqueName: \"kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659632 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659719 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.659802 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.666908 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.667620 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.670368 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.672438 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.672914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.673397 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.673454 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:33 
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.674426 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.675194 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.684059 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.694907 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sfvnk"]
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.698140 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptsnx\" (UniqueName: \"kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.698192 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.698567 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.699051 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.699551 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.702567 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.728249 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvw9\" (UniqueName: \"kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9\") pod \"ceilometer-0\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " pod="openstack/ceilometer-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.768310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxnvw"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.771888 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwz7w\" (UniqueName: \"kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w\") pod \"nova-api-1045-account-create-update-dd95j\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " pod="openstack/nova-api-1045-account-create-update-dd95j"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777094 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777173 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777206 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777271 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777304 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmt5w\" (UniqueName: \"kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777374 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777401 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777457 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz8rp\" (UniqueName: \"kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.777479 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.782166 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.784624 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.784901 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.801025 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sfvnk"]
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.821347 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.830297 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz8rp\" (UniqueName: \"kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.839209 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.845170 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.862720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.865500 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.888843 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-12d4-account-create-update-qnb6b"]
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.890127 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.898621 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.898713 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmt5w\" (UniqueName: \"kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.899639 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.911809 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.936507 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmt5w\" (UniqueName: \"kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w\") pod \"nova-cell1-db-create-sfvnk\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.943299 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.957595 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.957685 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.992106 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1045-account-create-update-dd95j"
Jan 21 14:53:33 crc kubenswrapper[4834]: I0121 14:53:33.998060 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12d4-account-create-update-qnb6b"]
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.006021 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzts\" (UniqueName: \"kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.006093 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.055528 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfvnk"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.091511 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.099507 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-73cb-account-create-update-qghqp"]
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.109787 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzts\" (UniqueName: \"kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.109838 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.111552 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.127332 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-73cb-account-create-update-qghqp"]
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.127626 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.134273 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.136500 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzts\" (UniqueName: \"kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts\") pod \"nova-cell0-12d4-account-create-update-qnb6b\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.196366 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.216798 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfx45\" (UniqueName: \"kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.216889 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.329486 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfx45\" (UniqueName: \"kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.329994 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.331084 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.342608 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67626626-3343-484b-9c1c-6d7bee71821f" path="/var/lib/kubelet/pods/67626626-3343-484b-9c1c-6d7bee71821f/volumes"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.343533 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa61cdda-76e7-42fe-b399-6f7cfda22356" path="/var/lib/kubelet/pods/aa61cdda-76e7-42fe-b399-6f7cfda22356/volumes"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.345170 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc2d5700-1644-4504-aae6-8bcf6c87363f" path="/var/lib/kubelet/pods/dc2d5700-1644-4504-aae6-8bcf6c87363f/volumes"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.362509 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfx45\" (UniqueName: \"kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45\") pod \"nova-cell1-73cb-account-create-update-qghqp\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.423368 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-trz62"]
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.525609 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73cb-account-create-update-qghqp"
Jan 21 14:53:34 crc kubenswrapper[4834]: W0121 14:53:34.528298 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5b0fd2_ece5_4adc_9b6b_25103f997228.slice/crio-b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c WatchSource:0}: Error finding container b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c: Status 404 returned error can't find the container with id b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c
Jan 21 14:53:34 crc kubenswrapper[4834]: I0121 14:53:34.608213 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qxnvw"]
Jan 21 14:53:34 crc kubenswrapper[4834]: W0121 14:53:34.717608 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97616acb_173e_474b_a7f7_dcbc8bd2f0a6.slice/crio-41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac WatchSource:0}: Error finding container 41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac: Status 404 returned error can't find the container with id 41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.093966 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2","Type":"ContainerStarted","Data":"d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830"}
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.108751 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerStarted","Data":"db044be1aefe255c10ce8baeb7d8226a8e00d8f2d732191c49cc1dce3a593cd6"}
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.111635 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-trz62" event={"ID":"3f5b0fd2-ece5-4adc-9b6b-25103f997228","Type":"ContainerStarted","Data":"1da4708d65368042feab6ed33ff9ce319ef9460c7feb86ef2980a44d86a2a1dc"}
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.111668 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-trz62" event={"ID":"3f5b0fd2-ece5-4adc-9b6b-25103f997228","Type":"ContainerStarted","Data":"b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c"}
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.164840 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.487003315 podStartE2EDuration="44.164814033s" podCreationTimestamp="2026-01-21 14:52:51 +0000 UTC" firstStartedPulling="2026-01-21 14:52:52.271064206 +0000 UTC m=+1318.245413291" lastFinishedPulling="2026-01-21 14:53:33.948874964 +0000 UTC m=+1359.923224009" observedRunningTime="2026-01-21 14:53:35.115386749 +0000 UTC m=+1361.089735794" watchObservedRunningTime="2026-01-21 14:53:35.164814033 +0000 UTC m=+1361.139163078"
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.181283 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxnvw" event={"ID":"97616acb-173e-474b-a7f7-dcbc8bd2f0a6","Type":"ContainerStarted","Data":"41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac"}
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.184100 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-trz62" podStartSLOduration=3.184081762 podStartE2EDuration="3.184081762s" podCreationTimestamp="2026-01-21 14:53:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:35.134009197 +0000 UTC m=+1361.108358252" watchObservedRunningTime="2026-01-21 14:53:35.184081762 +0000 UTC m=+1361.158430807"
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.299162 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.517428 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1045-account-create-update-dd95j"]
Jan 21 14:53:35 crc kubenswrapper[4834]: W0121 14:53:35.554147 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70fcc671_cdfe_4459_aca3_c81262989590.slice/crio-ea9e4a9e4aed18ff5c0976c23e76566b00d63ca342a1f36a32348efc094a43cd WatchSource:0}: Error finding container ea9e4a9e4aed18ff5c0976c23e76566b00d63ca342a1f36a32348efc094a43cd: Status 404 returned error can't find the container with id ea9e4a9e4aed18ff5c0976c23e76566b00d63ca342a1f36a32348efc094a43cd
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.564210 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.596593 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sfvnk"]
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.608004 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-73cb-account-create-update-qghqp"]
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.617512 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-12d4-account-create-update-qnb6b"]
Jan 21 14:53:35 crc kubenswrapper[4834]: I0121 14:53:35.631516 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.211574 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerStarted","Data":"3ce00cf2fad839def106c12c9fffdc46329ab8ef0cb444963c763fe9c17ff6dc"}
Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.234220 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b" event={"ID":"5c79cc29-09cc-4649-815d-8e5ea52e05c9","Type":"ContainerStarted","Data":"5d95c5900e54f7ef25d51b50987bb7d4d4e33beb12abd0cd83bff8517381fea7"}
Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.251221 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfvnk" event={"ID":"ee9e0a88-3e00-4c83-860f-25a0d932f773","Type":"ContainerStarted","Data":"089e0fbec33ebd3365fc37c9fffba9d0eeddd27510165d4aca2f0739ad5323ab"}
event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerStarted","Data":"ea9e4a9e4aed18ff5c0976c23e76566b00d63ca342a1f36a32348efc094a43cd"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.421182 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerStarted","Data":"83df69fcbb26d2aebb7daf416be409e283ad79ee46ccc601f9324e32b0922177"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.431362 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerStarted","Data":"819bd9c965bf2ba263ce309dcf46a6a548468ca24472b2286bf426740d233239"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.444739 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.444713749 podStartE2EDuration="6.444713749s" podCreationTimestamp="2026-01-21 14:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:36.442064637 +0000 UTC m=+1362.416413692" watchObservedRunningTime="2026-01-21 14:53:36.444713749 +0000 UTC m=+1362.419062794" Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.477439 4834 generic.go:334] "Generic (PLEG): container finished" podID="3f5b0fd2-ece5-4adc-9b6b-25103f997228" containerID="1da4708d65368042feab6ed33ff9ce319ef9460c7feb86ef2980a44d86a2a1dc" exitCode=0 Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.478341 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-trz62" event={"ID":"3f5b0fd2-ece5-4adc-9b6b-25103f997228","Type":"ContainerDied","Data":"1da4708d65368042feab6ed33ff9ce319ef9460c7feb86ef2980a44d86a2a1dc"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.505913 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.529609 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-dd95j" event={"ID":"4734873a-25bb-46e2-bb5b-68f9cd776682","Type":"ContainerStarted","Data":"2f3932789a99d5835acfc99b1868b189cbc0e881b542fe12d0bd0edb9430be56"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.529695 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-dd95j" event={"ID":"4734873a-25bb-46e2-bb5b-68f9cd776682","Type":"ContainerStarted","Data":"2b608e1ebe6c4c33e9f545ccac2ada9a87dfcf1df24f9cf159035309ac2204fb"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.534246 4834 generic.go:334] "Generic (PLEG): container finished" podID="97616acb-173e-474b-a7f7-dcbc8bd2f0a6" containerID="022b4dcefb10e6849119a581d8764d474f56076acec2b622fe0ded8cd0b6117c" exitCode=0 Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.534333 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxnvw" event={"ID":"97616acb-173e-474b-a7f7-dcbc8bd2f0a6","Type":"ContainerDied","Data":"022b4dcefb10e6849119a581d8764d474f56076acec2b622fe0ded8cd0b6117c"} Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.575660 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1045-account-create-update-dd95j" podStartSLOduration=3.575632004 podStartE2EDuration="3.575632004s" 
podCreationTimestamp="2026-01-21 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:36.546303874 +0000 UTC m=+1362.520652919" watchObservedRunningTime="2026-01-21 14:53:36.575632004 +0000 UTC m=+1362.549981059" Jan 21 14:53:36 crc kubenswrapper[4834]: I0121 14:53:36.583751 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73cb-account-create-update-qghqp" event={"ID":"123d466f-e93e-478a-8732-465c6099201b","Type":"ContainerStarted","Data":"c35ee75887d01ed9868821d823ef22b70ba12b098ee27b5b9fb95ee6becdda7c"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.316600 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.627725 4834 generic.go:334] "Generic (PLEG): container finished" podID="ee9e0a88-3e00-4c83-860f-25a0d932f773" containerID="3091fd5a5129a158db5ebfe8939671c846d9d2c5198ff4166ff9ed88062b7d32" exitCode=0 Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.628287 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfvnk" event={"ID":"ee9e0a88-3e00-4c83-860f-25a0d932f773","Type":"ContainerDied","Data":"3091fd5a5129a158db5ebfe8939671c846d9d2c5198ff4166ff9ed88062b7d32"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.634041 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerStarted","Data":"c620dd7ee3e093edd450650a3caaba7af66ba9591d39d946f01c02f5d3938eb2"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.637957 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerStarted","Data":"b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.646312 4834 generic.go:334] "Generic (PLEG): container finished" podID="4734873a-25bb-46e2-bb5b-68f9cd776682" containerID="2f3932789a99d5835acfc99b1868b189cbc0e881b542fe12d0bd0edb9430be56" exitCode=0 Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.646500 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-dd95j" event={"ID":"4734873a-25bb-46e2-bb5b-68f9cd776682","Type":"ContainerDied","Data":"2f3932789a99d5835acfc99b1868b189cbc0e881b542fe12d0bd0edb9430be56"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.660313 4834 generic.go:334] "Generic (PLEG): container finished" podID="123d466f-e93e-478a-8732-465c6099201b" containerID="bd82e25130556e19b66daf7d96d6d14fa88862acddfe74bca857b1a15c8bb86c" exitCode=0 Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.660428 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73cb-account-create-update-qghqp" event={"ID":"123d466f-e93e-478a-8732-465c6099201b","Type":"ContainerDied","Data":"bd82e25130556e19b66daf7d96d6d14fa88862acddfe74bca857b1a15c8bb86c"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.676856 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerStarted","Data":"42b08d33f6e569457d53d7d0fb1dde4b71a0fbe929495ac2886bed4d36c39ea2"} Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.682561 4834 
generic.go:334] "Generic (PLEG): container finished" podID="5c79cc29-09cc-4649-815d-8e5ea52e05c9" containerID="e2e5f7a516e7bf1e00be01f5c942a902e3d70b5381d1e5c1009574470b8a35af" exitCode=0 Jan 21 14:53:37 crc kubenswrapper[4834]: I0121 14:53:37.683562 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b" event={"ID":"5c79cc29-09cc-4649-815d-8e5ea52e05c9","Type":"ContainerDied","Data":"e2e5f7a516e7bf1e00be01f5c942a902e3d70b5381d1e5c1009574470b8a35af"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.271028 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.276680 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.432670 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgvjg\" (UniqueName: \"kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg\") pod \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.432736 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bx6w\" (UniqueName: \"kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w\") pod \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.432850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts\") pod \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\" (UID: \"97616acb-173e-474b-a7f7-dcbc8bd2f0a6\") " Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.432896 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts\") pod \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\" (UID: \"3f5b0fd2-ece5-4adc-9b6b-25103f997228\") " Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.436437 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97616acb-173e-474b-a7f7-dcbc8bd2f0a6" (UID: "97616acb-173e-474b-a7f7-dcbc8bd2f0a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.437409 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f5b0fd2-ece5-4adc-9b6b-25103f997228" (UID: "3f5b0fd2-ece5-4adc-9b6b-25103f997228"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.461291 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w" (OuterVolumeSpecName: "kube-api-access-2bx6w") pod "97616acb-173e-474b-a7f7-dcbc8bd2f0a6" (UID: "97616acb-173e-474b-a7f7-dcbc8bd2f0a6"). InnerVolumeSpecName "kube-api-access-2bx6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.461401 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg" (OuterVolumeSpecName: "kube-api-access-zgvjg") pod "3f5b0fd2-ece5-4adc-9b6b-25103f997228" (UID: "3f5b0fd2-ece5-4adc-9b6b-25103f997228"). InnerVolumeSpecName "kube-api-access-zgvjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.535291 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgvjg\" (UniqueName: \"kubernetes.io/projected/3f5b0fd2-ece5-4adc-9b6b-25103f997228-kube-api-access-zgvjg\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.535333 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bx6w\" (UniqueName: \"kubernetes.io/projected/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-kube-api-access-2bx6w\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.535346 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97616acb-173e-474b-a7f7-dcbc8bd2f0a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.535355 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f5b0fd2-ece5-4adc-9b6b-25103f997228-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.695758 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerStarted","Data":"4cdf01d893881ac1c03282850be08114203e3678d346c96a0db7e061046cf925"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.701531 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-trz62" event={"ID":"3f5b0fd2-ece5-4adc-9b6b-25103f997228","Type":"ContainerDied","Data":"b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.701595 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75b4fdfd8b84f77d71adac078981fea327c1427ef7595a79ad1ee9b6fff5f5c" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.701678 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-trz62" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.707780 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxnvw" event={"ID":"97616acb-173e-474b-a7f7-dcbc8bd2f0a6","Type":"ContainerDied","Data":"41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.707911 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41247b4d2116d23de0f6844c1c844b52f5cc55e206103fef65d7f19a295a98ac" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.707796 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxnvw" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.710513 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerStarted","Data":"2ebbbaf00d9a369dd1ca5f46d2f165bdb5bf4264e991c134e7c3cd8817356d6f"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.714292 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerStarted","Data":"8fc3aad40377d595595388d78b9fe1fda9e17816a753e66b800bfd6a30dbc077"} Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.732941 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.732896846 podStartE2EDuration="5.732896846s" podCreationTimestamp="2026-01-21 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:38.728632903 +0000 UTC m=+1364.702981948" watchObservedRunningTime="2026-01-21 14:53:38.732896846 +0000 UTC m=+1364.707245891" Jan 21 14:53:38 crc kubenswrapper[4834]: I0121 14:53:38.767568 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.767530661 podStartE2EDuration="5.767530661s" podCreationTimestamp="2026-01-21 14:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:53:38.76398033 +0000 UTC m=+1364.738329375" watchObservedRunningTime="2026-01-21 14:53:38.767530661 +0000 UTC m=+1364.741879706" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.222007 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfvnk" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.356382 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.376768 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmt5w\" (UniqueName: \"kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w\") pod \"ee9e0a88-3e00-4c83-860f-25a0d932f773\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.376883 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts\") pod \"4734873a-25bb-46e2-bb5b-68f9cd776682\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.377118 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts\") pod \"ee9e0a88-3e00-4c83-860f-25a0d932f773\" (UID: \"ee9e0a88-3e00-4c83-860f-25a0d932f773\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.377701 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwz7w\" (UniqueName: \"kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w\") pod \"4734873a-25bb-46e2-bb5b-68f9cd776682\" (UID: \"4734873a-25bb-46e2-bb5b-68f9cd776682\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.386092 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w" (OuterVolumeSpecName: "kube-api-access-gmt5w") pod "ee9e0a88-3e00-4c83-860f-25a0d932f773" (UID: "ee9e0a88-3e00-4c83-860f-25a0d932f773"). InnerVolumeSpecName "kube-api-access-gmt5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.386638 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee9e0a88-3e00-4c83-860f-25a0d932f773" (UID: "ee9e0a88-3e00-4c83-860f-25a0d932f773"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.387711 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w" (OuterVolumeSpecName: "kube-api-access-pwz7w") pod "4734873a-25bb-46e2-bb5b-68f9cd776682" (UID: "4734873a-25bb-46e2-bb5b-68f9cd776682"). InnerVolumeSpecName "kube-api-access-pwz7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.387898 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4734873a-25bb-46e2-bb5b-68f9cd776682" (UID: "4734873a-25bb-46e2-bb5b-68f9cd776682"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.441223 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-73cb-account-create-update-qghqp" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.462733 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.481240 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfx45\" (UniqueName: \"kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45\") pod \"123d466f-e93e-478a-8732-465c6099201b\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.481386 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts\") pod \"123d466f-e93e-478a-8732-465c6099201b\" (UID: \"123d466f-e93e-478a-8732-465c6099201b\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.481500 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts\") pod \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.481535 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjzts\" (UniqueName: \"kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts\") pod \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\" (UID: \"5c79cc29-09cc-4649-815d-8e5ea52e05c9\") " Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.482095 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee9e0a88-3e00-4c83-860f-25a0d932f773-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.482114 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwz7w\" (UniqueName: \"kubernetes.io/projected/4734873a-25bb-46e2-bb5b-68f9cd776682-kube-api-access-pwz7w\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.482127 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmt5w\" (UniqueName: \"kubernetes.io/projected/ee9e0a88-3e00-4c83-860f-25a0d932f773-kube-api-access-gmt5w\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.482137 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4734873a-25bb-46e2-bb5b-68f9cd776682-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.482991 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "123d466f-e93e-478a-8732-465c6099201b" (UID: "123d466f-e93e-478a-8732-465c6099201b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.483674 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c79cc29-09cc-4649-815d-8e5ea52e05c9" (UID: "5c79cc29-09cc-4649-815d-8e5ea52e05c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.486023 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts" (OuterVolumeSpecName: "kube-api-access-vjzts") pod "5c79cc29-09cc-4649-815d-8e5ea52e05c9" (UID: "5c79cc29-09cc-4649-815d-8e5ea52e05c9"). InnerVolumeSpecName "kube-api-access-vjzts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.488454 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45" (OuterVolumeSpecName: "kube-api-access-hfx45") pod "123d466f-e93e-478a-8732-465c6099201b" (UID: "123d466f-e93e-478a-8732-465c6099201b"). InnerVolumeSpecName "kube-api-access-hfx45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.583956 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c79cc29-09cc-4649-815d-8e5ea52e05c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.584003 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjzts\" (UniqueName: \"kubernetes.io/projected/5c79cc29-09cc-4649-815d-8e5ea52e05c9-kube-api-access-vjzts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.584021 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfx45\" (UniqueName: \"kubernetes.io/projected/123d466f-e93e-478a-8732-465c6099201b-kube-api-access-hfx45\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.584030 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123d466f-e93e-478a-8732-465c6099201b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.726180 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerStarted","Data":"0fcab19dc0839d5d700d7294ab9695cace4b04c8fc4e6c6335f213d62115dba9"} Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.727731 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-dd95j" event={"ID":"4734873a-25bb-46e2-bb5b-68f9cd776682","Type":"ContainerDied","Data":"2b608e1ebe6c4c33e9f545ccac2ada9a87dfcf1df24f9cf159035309ac2204fb"} Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.727760 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b608e1ebe6c4c33e9f545ccac2ada9a87dfcf1df24f9cf159035309ac2204fb" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.727822 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1045-account-create-update-dd95j" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.730969 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73cb-account-create-update-qghqp" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.730924 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73cb-account-create-update-qghqp" event={"ID":"123d466f-e93e-478a-8732-465c6099201b","Type":"ContainerDied","Data":"c35ee75887d01ed9868821d823ef22b70ba12b098ee27b5b9fb95ee6becdda7c"} Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.731242 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c35ee75887d01ed9868821d823ef22b70ba12b098ee27b5b9fb95ee6becdda7c" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.732907 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b" event={"ID":"5c79cc29-09cc-4649-815d-8e5ea52e05c9","Type":"ContainerDied","Data":"5d95c5900e54f7ef25d51b50987bb7d4d4e33beb12abd0cd83bff8517381fea7"} Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.732950 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d95c5900e54f7ef25d51b50987bb7d4d4e33beb12abd0cd83bff8517381fea7" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.732987 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-12d4-account-create-update-qnb6b" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.734550 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sfvnk" Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.734546 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sfvnk" event={"ID":"ee9e0a88-3e00-4c83-860f-25a0d932f773","Type":"ContainerDied","Data":"089e0fbec33ebd3365fc37c9fffba9d0eeddd27510165d4aca2f0739ad5323ab"} Jan 21 14:53:39 crc kubenswrapper[4834]: I0121 14:53:39.734699 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089e0fbec33ebd3365fc37c9fffba9d0eeddd27510165d4aca2f0739ad5323ab" Jan 21 14:53:40 crc kubenswrapper[4834]: I0121 14:53:40.749147 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9b7d697c-0f23-4ea5-b8eb-2735d019c579" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.156:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.401826 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.744482 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.763790 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerStarted","Data":"c1c69fb45f5e48b364566d0de56047195067fe382380dfe6516e115df0eb87f1"} Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.763995 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70fcc671-cdfe-4459-aca3-c81262989590" 
containerName="ceilometer-central-agent" containerID="cri-o://c620dd7ee3e093edd450650a3caaba7af66ba9591d39d946f01c02f5d3938eb2" gracePeriod=30 Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.764048 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="sg-core" containerID="cri-o://0fcab19dc0839d5d700d7294ab9695cace4b04c8fc4e6c6335f213d62115dba9" gracePeriod=30 Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.764096 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-notification-agent" containerID="cri-o://8fc3aad40377d595595388d78b9fe1fda9e17816a753e66b800bfd6a30dbc077" gracePeriod=30 Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.764109 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="proxy-httpd" containerID="cri-o://c1c69fb45f5e48b364566d0de56047195067fe382380dfe6516e115df0eb87f1" gracePeriod=30 Jan 21 14:53:41 crc kubenswrapper[4834]: I0121 14:53:41.820440 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.120930547 podStartE2EDuration="8.820416945s" podCreationTimestamp="2026-01-21 14:53:33 +0000 UTC" firstStartedPulling="2026-01-21 14:53:35.594817496 +0000 UTC m=+1361.569166541" lastFinishedPulling="2026-01-21 14:53:40.294303894 +0000 UTC m=+1366.268652939" observedRunningTime="2026-01-21 14:53:41.816337758 +0000 UTC m=+1367.790686803" watchObservedRunningTime="2026-01-21 14:53:41.820416945 +0000 UTC m=+1367.794765990" Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.778547 4834 generic.go:334] "Generic (PLEG): container finished" podID="70fcc671-cdfe-4459-aca3-c81262989590" containerID="c1c69fb45f5e48b364566d0de56047195067fe382380dfe6516e115df0eb87f1" exitCode=0 Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.778979 4834 generic.go:334] "Generic (PLEG): container finished" podID="70fcc671-cdfe-4459-aca3-c81262989590" containerID="0fcab19dc0839d5d700d7294ab9695cace4b04c8fc4e6c6335f213d62115dba9" exitCode=2 Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.778615 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerDied","Data":"c1c69fb45f5e48b364566d0de56047195067fe382380dfe6516e115df0eb87f1"} Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.779030 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerDied","Data":"0fcab19dc0839d5d700d7294ab9695cace4b04c8fc4e6c6335f213d62115dba9"} Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.779049 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerDied","Data":"8fc3aad40377d595595388d78b9fe1fda9e17816a753e66b800bfd6a30dbc077"} Jan 21 14:53:42 crc kubenswrapper[4834]: I0121 14:53:42.778992 4834 generic.go:334] "Generic (PLEG): container finished" podID="70fcc671-cdfe-4459-aca3-c81262989590" containerID="8fc3aad40377d595595388d78b9fe1fda9e17816a753e66b800bfd6a30dbc077" exitCode=0 Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.945292 4834 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.945353 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.987286 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lkvk9"] Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988074 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9e0a88-3e00-4c83-860f-25a0d932f773" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988092 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9e0a88-3e00-4c83-860f-25a0d932f773" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988120 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97616acb-173e-474b-a7f7-dcbc8bd2f0a6" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988127 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="97616acb-173e-474b-a7f7-dcbc8bd2f0a6" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988143 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123d466f-e93e-478a-8732-465c6099201b" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988149 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="123d466f-e93e-478a-8732-465c6099201b" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988169 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734873a-25bb-46e2-bb5b-68f9cd776682" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988177 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734873a-25bb-46e2-bb5b-68f9cd776682" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988190 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5b0fd2-ece5-4adc-9b6b-25103f997228" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988196 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5b0fd2-ece5-4adc-9b6b-25103f997228" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: E0121 14:53:43.988206 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c79cc29-09cc-4649-815d-8e5ea52e05c9" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988211 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c79cc29-09cc-4649-815d-8e5ea52e05c9" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988426 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="123d466f-e93e-478a-8732-465c6099201b" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988446 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="97616acb-173e-474b-a7f7-dcbc8bd2f0a6" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988453 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c79cc29-09cc-4649-815d-8e5ea52e05c9" 
containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988466 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="4734873a-25bb-46e2-bb5b-68f9cd776682" containerName="mariadb-account-create-update" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988476 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9e0a88-3e00-4c83-860f-25a0d932f773" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.988486 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5b0fd2-ece5-4adc-9b6b-25103f997228" containerName="mariadb-database-create" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.989291 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.989479 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.995311 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.995786 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.995823 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lkvk9"] Jan 21 14:53:43 crc kubenswrapper[4834]: I0121 14:53:43.996094 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c4pbl" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.018465 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.093975 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.094064 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.096667 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.096766 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.096808 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mshzl\" (UniqueName: \"kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " 
pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.097035 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.133813 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.142657 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.198309 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.198424 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.198493 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.198524 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mshzl\" (UniqueName: \"kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.205596 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.206010 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.218045 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " 
pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.220506 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mshzl\" (UniqueName: \"kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl\") pod \"nova-cell0-conductor-db-sync-lkvk9\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.317014 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.802112 4834 generic.go:334] "Generic (PLEG): container finished" podID="70fcc671-cdfe-4459-aca3-c81262989590" containerID="c620dd7ee3e093edd450650a3caaba7af66ba9591d39d946f01c02f5d3938eb2" exitCode=0 Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.804060 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerDied","Data":"c620dd7ee3e093edd450650a3caaba7af66ba9591d39d946f01c02f5d3938eb2"} Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.804516 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.804580 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.804600 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.804636 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.933996 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lkvk9"] Jan 21 14:53:44 crc kubenswrapper[4834]: I0121 14:53:44.974222 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120245 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvw9\" (UniqueName: \"kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120392 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120434 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120483 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120522 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.120602 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.121156 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.122154 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd\") pod \"70fcc671-cdfe-4459-aca3-c81262989590\" (UID: \"70fcc671-cdfe-4459-aca3-c81262989590\") " Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.122705 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.129399 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.132337 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9" (OuterVolumeSpecName: "kube-api-access-bxvw9") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "kube-api-access-bxvw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.137589 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts" (OuterVolumeSpecName: "scripts") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.172174 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.224374 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70fcc671-cdfe-4459-aca3-c81262989590-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.224710 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvw9\" (UniqueName: \"kubernetes.io/projected/70fcc671-cdfe-4459-aca3-c81262989590-kube-api-access-bxvw9\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.224722 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.224730 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.243002 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.267173 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data" (OuterVolumeSpecName: "config-data") pod "70fcc671-cdfe-4459-aca3-c81262989590" (UID: "70fcc671-cdfe-4459-aca3-c81262989590"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.326104 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.326151 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70fcc671-cdfe-4459-aca3-c81262989590-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.824112 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"70fcc671-cdfe-4459-aca3-c81262989590","Type":"ContainerDied","Data":"ea9e4a9e4aed18ff5c0976c23e76566b00d63ca342a1f36a32348efc094a43cd"} Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.824630 4834 scope.go:117] "RemoveContainer" containerID="c1c69fb45f5e48b364566d0de56047195067fe382380dfe6516e115df0eb87f1" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.824187 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.828134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" event={"ID":"8d2f6b9b-bf7b-4026-9269-7f77233ec402","Type":"ContainerStarted","Data":"b4379dce763e5351b07bde6c8e78b26be6c591321dfd5d98bcea6d840d596cdf"} Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.887169 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.897972 4834 scope.go:117] "RemoveContainer" containerID="0fcab19dc0839d5d700d7294ab9695cace4b04c8fc4e6c6335f213d62115dba9" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.906729 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.920590 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:45 crc kubenswrapper[4834]: E0121 14:53:45.921302 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="sg-core" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921334 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="sg-core" Jan 21 14:53:45 crc kubenswrapper[4834]: E0121 14:53:45.921350 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="proxy-httpd" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921372 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="proxy-httpd" Jan 21 14:53:45 crc kubenswrapper[4834]: E0121 14:53:45.921383 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-notification-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921393 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-notification-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: E0121 14:53:45.921441 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70fcc671-cdfe-4459-aca3-c81262989590" 
containerName="ceilometer-central-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921450 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-central-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921693 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-central-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921717 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="proxy-httpd" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921745 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="ceilometer-notification-agent" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.921763 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="70fcc671-cdfe-4459-aca3-c81262989590" containerName="sg-core" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.924210 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.927149 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.930040 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.969717 4834 scope.go:117] "RemoveContainer" containerID="8fc3aad40377d595595388d78b9fe1fda9e17816a753e66b800bfd6a30dbc077" Jan 21 14:53:45 crc kubenswrapper[4834]: I0121 14:53:45.974019 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.010064 4834 scope.go:117] "RemoveContainer" containerID="c620dd7ee3e093edd450650a3caaba7af66ba9591d39d946f01c02f5d3938eb2" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.064672 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.064976 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.065019 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.065185 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 
14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.065291 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.065447 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.065542 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8tq\" (UniqueName: \"kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.168248 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.168610 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8tq\" (UniqueName: \"kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.168816 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.168820 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.168921 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.169036 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.169125 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " 
pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.169163 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.170486 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.179816 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.180141 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.180234 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.187561 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8tq\" (UniqueName: \"kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.191666 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data\") pod \"ceilometer-0\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") " pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.264048 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.342372 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70fcc671-cdfe-4459-aca3-c81262989590" path="/var/lib/kubelet/pods/70fcc671-cdfe-4459-aca3-c81262989590/volumes" Jan 21 14:53:46 crc kubenswrapper[4834]: I0121 14:53:46.858906 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.391843 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.392522 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.407358 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.407543 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.529960 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.549987 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.887771 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerStarted","Data":"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f"} Jan 21 14:53:47 crc kubenswrapper[4834]: I0121 14:53:47.888295 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerStarted","Data":"15672d8b338c631e2185078f4d4e03dc70d63681ef86c6fafebd2f5c3508d7fb"} Jan 21 14:53:48 crc kubenswrapper[4834]: I0121 14:53:48.901052 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerStarted","Data":"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e"} Jan 21 14:53:49 crc kubenswrapper[4834]: I0121 14:53:49.914753 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerStarted","Data":"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c"} Jan 21 14:53:54 crc kubenswrapper[4834]: I0121 14:53:54.996343 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" event={"ID":"8d2f6b9b-bf7b-4026-9269-7f77233ec402","Type":"ContainerStarted","Data":"f79820e45fbe2f14701ead04fcf982473e4bd920f4dc695e1566caa2639d510f"} Jan 21 14:53:55 crc kubenswrapper[4834]: I0121 14:53:55.002060 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerStarted","Data":"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb"} Jan 21 14:53:55 crc kubenswrapper[4834]: I0121 14:53:55.003582 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:53:55 crc kubenswrapper[4834]: I0121 14:53:55.027049 4834 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" podStartSLOduration=2.540204998 podStartE2EDuration="12.027018891s" podCreationTimestamp="2026-01-21 14:53:43 +0000 UTC" firstStartedPulling="2026-01-21 14:53:44.951541658 +0000 UTC m=+1370.925890703" lastFinishedPulling="2026-01-21 14:53:54.438355551 +0000 UTC m=+1380.412704596" observedRunningTime="2026-01-21 14:53:55.019501558 +0000 UTC m=+1380.993850603" watchObservedRunningTime="2026-01-21 14:53:55.027018891 +0000 UTC m=+1381.001367936" Jan 21 14:53:55 crc kubenswrapper[4834]: I0121 14:53:55.045323 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.587341881 podStartE2EDuration="10.045292069s" podCreationTimestamp="2026-01-21 14:53:45 +0000 UTC" firstStartedPulling="2026-01-21 14:53:46.904477955 +0000 UTC m=+1372.878827000" lastFinishedPulling="2026-01-21 14:53:54.362428143 +0000 UTC m=+1380.336777188" observedRunningTime="2026-01-21 14:53:55.043610446 +0000 UTC m=+1381.017959511" watchObservedRunningTime="2026-01-21 14:53:55.045292069 +0000 UTC m=+1381.019641114" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.338754 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.344090 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.373827 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.490356 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjrq\" (UniqueName: \"kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.490647 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.490869 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.593060 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjrq\" (UniqueName: \"kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.593146 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.593188 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.593607 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.593734 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.617113 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjrq\" (UniqueName: \"kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq\") pod \"redhat-operators-brn6m\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:57 crc kubenswrapper[4834]: I0121 14:53:57.677231 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:53:58 crc kubenswrapper[4834]: I0121 14:53:58.209085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:53:58 crc kubenswrapper[4834]: W0121 14:53:58.212850 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcab31199_772f_450d_8008_26b3a39443ed.slice/crio-251bdf69667f447c8202c43b648e541d2f8f58ecf22d2d82dfd9119f8a66b8ec WatchSource:0}: Error finding container 251bdf69667f447c8202c43b648e541d2f8f58ecf22d2d82dfd9119f8a66b8ec: Status 404 returned error can't find the container with id 251bdf69667f447c8202c43b648e541d2f8f58ecf22d2d82dfd9119f8a66b8ec Jan 21 14:53:59 crc kubenswrapper[4834]: I0121 14:53:59.043683 4834 generic.go:334] "Generic (PLEG): container finished" podID="cab31199-772f-450d-8008-26b3a39443ed" containerID="3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190" exitCode=0 Jan 21 14:53:59 crc kubenswrapper[4834]: I0121 14:53:59.043779 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerDied","Data":"3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190"} Jan 21 14:53:59 crc kubenswrapper[4834]: I0121 14:53:59.044200 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerStarted","Data":"251bdf69667f447c8202c43b648e541d2f8f58ecf22d2d82dfd9119f8a66b8ec"} Jan 21 14:54:02 crc kubenswrapper[4834]: I0121 14:54:02.090327 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerStarted","Data":"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b"} Jan 21 14:54:03 crc kubenswrapper[4834]: I0121 14:54:03.103841 4834 generic.go:334] "Generic (PLEG): container finished" podID="cab31199-772f-450d-8008-26b3a39443ed" containerID="d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b" exitCode=0 Jan 21 14:54:03 crc kubenswrapper[4834]: I0121 14:54:03.103907 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerDied","Data":"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b"} Jan 21 14:54:07 crc kubenswrapper[4834]: I0121 14:54:07.188639 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerStarted","Data":"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093"} Jan 21 14:54:07 crc kubenswrapper[4834]: I0121 14:54:07.224312 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brn6m" podStartSLOduration=3.228410813 podStartE2EDuration="10.224286123s" podCreationTimestamp="2026-01-21 14:53:57 +0000 UTC" firstStartedPulling="2026-01-21 14:53:59.04743207 +0000 UTC m=+1385.021781115" lastFinishedPulling="2026-01-21 14:54:06.04330738 +0000 UTC m=+1392.017656425" observedRunningTime="2026-01-21 14:54:07.210838835 +0000 UTC m=+1393.185187890" watchObservedRunningTime="2026-01-21 14:54:07.224286123 +0000 UTC m=+1393.198635168" Jan 21 14:54:07 crc 
kubenswrapper[4834]: I0121 14:54:07.678867 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:07 crc kubenswrapper[4834]: I0121 14:54:07.678950 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.611787 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.612213 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-central-agent" containerID="cri-o://cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f" gracePeriod=30 Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.612301 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="proxy-httpd" containerID="cri-o://e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb" gracePeriod=30 Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.612415 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-notification-agent" containerID="cri-o://d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e" gracePeriod=30 Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.612435 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="sg-core" containerID="cri-o://75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c" gracePeriod=30 Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.618248 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:54:08 crc kubenswrapper[4834]: I0121 14:54:08.753138 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-brn6m" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="registry-server" probeResult="failure" output=< Jan 21 14:54:08 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 14:54:08 crc kubenswrapper[4834]: > Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.209743 4834 generic.go:334] "Generic (PLEG): container finished" podID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerID="e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb" exitCode=0 Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.209785 4834 generic.go:334] "Generic (PLEG): container finished" podID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerID="75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c" exitCode=2 Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.209799 4834 generic.go:334] "Generic (PLEG): container finished" podID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerID="cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f" exitCode=0 Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.210181 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerDied","Data":"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb"} Jan 21 
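
The Startup probe failure above, with its multi-line output block, is the catalog pod's registry-server not yet accepting connections on its gRPC port: timeout: failed to connect service ":50051" within 1s. The kubelet keeps retrying until the probe passes or the failure threshold is exceeded. A minimal connectivity check in the same spirit (a plain TCP dial with the 1s budget; the real probe speaks the gRPC health protocol, so this is only an illustrative approximation):

    // Minimal ":50051 within 1s" style connectivity check (TCP only).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
        if err != nil {
            fmt.Printf("timeout: failed to connect service \":50051\" within 1s (%v)\n", err)
            return
        }
        conn.Close()
        fmt.Println("service reachable")
    }
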
Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.210302 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerDied","Data":"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c"}
Jan 21 14:54:09 crc kubenswrapper[4834]: I0121 14:54:09.210403 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerDied","Data":"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f"}
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.163260 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.242896 4834 generic.go:334] "Generic (PLEG): container finished" podID="8d2f6b9b-bf7b-4026-9269-7f77233ec402" containerID="f79820e45fbe2f14701ead04fcf982473e4bd920f4dc695e1566caa2639d510f" exitCode=0
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.242974 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" event={"ID":"8d2f6b9b-bf7b-4026-9269-7f77233ec402","Type":"ContainerDied","Data":"f79820e45fbe2f14701ead04fcf982473e4bd920f4dc695e1566caa2639d510f"}
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.247291 4834 generic.go:334] "Generic (PLEG): container finished" podID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerID="d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e" exitCode=0
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.247338 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerDied","Data":"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e"}
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.247379 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6289cb84-1bca-4615-8b65-3c1ac95a7936","Type":"ContainerDied","Data":"15672d8b338c631e2185078f4d4e03dc70d63681ef86c6fafebd2f5c3508d7fb"}
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.247385 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.247408 4834 scope.go:117] "RemoveContainer" containerID="e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb"
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269060 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269137 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269186 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv8tq\" (UniqueName: \"kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269222 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269383 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269503 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269607 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd\") pod \"6289cb84-1bca-4615-8b65-3c1ac95a7936\" (UID: \"6289cb84-1bca-4615-8b65-3c1ac95a7936\") "
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.269863 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.270243 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.271187 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.278268 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq" (OuterVolumeSpecName: "kube-api-access-sv8tq") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "kube-api-access-sv8tq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.279115 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts" (OuterVolumeSpecName: "scripts") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.279349 4834 scope.go:117] "RemoveContainer" containerID="75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c"
Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.309681 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.381695 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.381734 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6289cb84-1bca-4615-8b65-3c1ac95a7936-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.381747 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv8tq\" (UniqueName: \"kubernetes.io/projected/6289cb84-1bca-4615-8b65-3c1ac95a7936-kube-api-access-sv8tq\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.381765 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.401216 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.466658 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data" (OuterVolumeSpecName: "config-data") pod "6289cb84-1bca-4615-8b65-3c1ac95a7936" (UID: "6289cb84-1bca-4615-8b65-3c1ac95a7936"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.484941 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.485026 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6289cb84-1bca-4615-8b65-3c1ac95a7936-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.556412 4834 scope.go:117] "RemoveContainer" containerID="d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.589320 4834 scope.go:117] "RemoveContainer" containerID="cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.591519 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.602329 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.615514 4834 scope.go:117] "RemoveContainer" containerID="e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.616292 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb\": container with ID starting with e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb not found: ID does not exist" containerID="e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.616350 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb"} err="failed to get container status \"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb\": rpc error: code = NotFound desc = could not find container \"e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb\": container with ID starting with e9aa01332700879649cefffc8342c179e2f7957ea1b55a5be3519151226dbddb not found: ID does not exist" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.616387 4834 scope.go:117] "RemoveContainer" containerID="75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.616835 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c\": container with ID starting with 75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c not found: ID does not exist" containerID="75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.617004 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c"} err="failed to get container status \"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c\": rpc error: code = NotFound desc = could not find container 
\"75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c\": container with ID starting with 75addb8dab02f853d78b7a725a9fecb682b3a44de66a8320cd0c5321474d558c not found: ID does not exist" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.617137 4834 scope.go:117] "RemoveContainer" containerID="d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.618241 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e\": container with ID starting with d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e not found: ID does not exist" containerID="d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.618302 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e"} err="failed to get container status \"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e\": rpc error: code = NotFound desc = could not find container \"d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e\": container with ID starting with d22b818b4697109ab3bbcb3e9c87be343377ab22ef2e47695017abdfb85a989e not found: ID does not exist" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.618344 4834 scope.go:117] "RemoveContainer" containerID="cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.619114 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f\": container with ID starting with cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f not found: ID does not exist" containerID="cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.619139 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f"} err="failed to get container status \"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f\": rpc error: code = NotFound desc = could not find container \"cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f\": container with ID starting with cd563a3abd77a6a58a87542ff74f155f97dd8dadcac4f89c21a17f0417faee8f not found: ID does not exist" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.633905 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.634607 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-notification-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634632 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-notification-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.634655 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="proxy-httpd" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634664 4834 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="proxy-httpd" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.634698 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-central-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634704 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-central-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: E0121 14:54:12.634720 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="sg-core" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634728 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="sg-core" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634894 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-notification-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634914 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="sg-core" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634947 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="proxy-httpd" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.634960 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" containerName="ceilometer-central-agent" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.636738 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.639876 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.640489 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.667327 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.793828 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.793881 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.793903 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.793945 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.794118 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.794151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.794312 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ljh\" (UniqueName: \"kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.896349 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897286 
4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897330 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897359 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897413 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897438 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.897485 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4ljh\" (UniqueName: \"kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.898179 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.898303 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.903026 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.903346 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.903355 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.903443 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.924970 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4ljh\" (UniqueName: \"kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh\") pod \"ceilometer-0\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " pod="openstack/ceilometer-0" Jan 21 14:54:12 crc kubenswrapper[4834]: I0121 14:54:12.966758 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.157790 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.158671 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="61736716-9721-48ac-9318-c2ceca59af62" containerName="kube-state-metrics" containerID="cri-o://d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236" gracePeriod=30 Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.458308 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:13 crc kubenswrapper[4834]: W0121 14:54:13.463304 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5031ac_2f93_45be_941a_98044d9b832a.slice/crio-3aca24d077974c60162292f12b5b1bfdce2cab68e633c56999c4b78c5356646c WatchSource:0}: Error finding container 3aca24d077974c60162292f12b5b1bfdce2cab68e633c56999c4b78c5356646c: Status 404 returned error can't find the container with id 3aca24d077974c60162292f12b5b1bfdce2cab68e633c56999c4b78c5356646c Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.688206 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.695003 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.719361 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle\") pod \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.719416 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data\") pod \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.719573 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts\") pod \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.719615 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxsc5\" (UniqueName: \"kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5\") pod \"61736716-9721-48ac-9318-c2ceca59af62\" (UID: \"61736716-9721-48ac-9318-c2ceca59af62\") " Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.719675 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mshzl\" (UniqueName: \"kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl\") pod \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\" (UID: \"8d2f6b9b-bf7b-4026-9269-7f77233ec402\") " Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.739803 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl" (OuterVolumeSpecName: "kube-api-access-mshzl") pod "8d2f6b9b-bf7b-4026-9269-7f77233ec402" (UID: "8d2f6b9b-bf7b-4026-9269-7f77233ec402"). InnerVolumeSpecName "kube-api-access-mshzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.745721 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5" (OuterVolumeSpecName: "kube-api-access-vxsc5") pod "61736716-9721-48ac-9318-c2ceca59af62" (UID: "61736716-9721-48ac-9318-c2ceca59af62"). InnerVolumeSpecName "kube-api-access-vxsc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.747408 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts" (OuterVolumeSpecName: "scripts") pod "8d2f6b9b-bf7b-4026-9269-7f77233ec402" (UID: "8d2f6b9b-bf7b-4026-9269-7f77233ec402"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.774217 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d2f6b9b-bf7b-4026-9269-7f77233ec402" (UID: "8d2f6b9b-bf7b-4026-9269-7f77233ec402"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.812845 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data" (OuterVolumeSpecName: "config-data") pod "8d2f6b9b-bf7b-4026-9269-7f77233ec402" (UID: "8d2f6b9b-bf7b-4026-9269-7f77233ec402"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.821712 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mshzl\" (UniqueName: \"kubernetes.io/projected/8d2f6b9b-bf7b-4026-9269-7f77233ec402-kube-api-access-mshzl\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.821840 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.821900 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.821980 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d2f6b9b-bf7b-4026-9269-7f77233ec402-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:13 crc kubenswrapper[4834]: I0121 14:54:13.822037 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxsc5\" (UniqueName: \"kubernetes.io/projected/61736716-9721-48ac-9318-c2ceca59af62-kube-api-access-vxsc5\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.283684 4834 generic.go:334] "Generic (PLEG): container finished" podID="61736716-9721-48ac-9318-c2ceca59af62" containerID="d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236" exitCode=2 Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.284322 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61736716-9721-48ac-9318-c2ceca59af62","Type":"ContainerDied","Data":"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236"} Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.284365 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61736716-9721-48ac-9318-c2ceca59af62","Type":"ContainerDied","Data":"7bfe46248fca29485426d66d22f71f52764ed9a558e53bb794f157f83633a42e"} Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.284389 4834 scope.go:117] "RemoveContainer" containerID="d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.284529 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.295227 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.295454 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lkvk9" event={"ID":"8d2f6b9b-bf7b-4026-9269-7f77233ec402","Type":"ContainerDied","Data":"b4379dce763e5351b07bde6c8e78b26be6c591321dfd5d98bcea6d840d596cdf"} Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.295575 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4379dce763e5351b07bde6c8e78b26be6c591321dfd5d98bcea6d840d596cdf" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.300766 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerStarted","Data":"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76"} Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.300827 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerStarted","Data":"3aca24d077974c60162292f12b5b1bfdce2cab68e633c56999c4b78c5356646c"} Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.356873 4834 scope.go:117] "RemoveContainer" containerID="d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236" Jan 21 14:54:14 crc kubenswrapper[4834]: E0121 14:54:14.365758 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236\": container with ID starting with d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236 not found: ID does not exist" containerID="d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.365840 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236"} err="failed to get container status \"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236\": rpc error: code = NotFound desc = could not find container \"d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236\": container with ID starting with d114fe9adc12fd65b3f82aa790de8085e069eb5bf5f24ba49ac9c3682bcd3236 not found: ID does not exist" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.381515 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6289cb84-1bca-4615-8b65-3c1ac95a7936" path="/var/lib/kubelet/pods/6289cb84-1bca-4615-8b65-3c1ac95a7936/volumes" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.407181 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.421031 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.430019 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: E0121 14:54:14.430533 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61736716-9721-48ac-9318-c2ceca59af62" containerName="kube-state-metrics" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.430556 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="61736716-9721-48ac-9318-c2ceca59af62" 
containerName="kube-state-metrics" Jan 21 14:54:14 crc kubenswrapper[4834]: E0121 14:54:14.430597 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2f6b9b-bf7b-4026-9269-7f77233ec402" containerName="nova-cell0-conductor-db-sync" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.430605 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2f6b9b-bf7b-4026-9269-7f77233ec402" containerName="nova-cell0-conductor-db-sync" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.430774 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2f6b9b-bf7b-4026-9269-7f77233ec402" containerName="nova-cell0-conductor-db-sync" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.430812 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="61736716-9721-48ac-9318-c2ceca59af62" containerName="kube-state-metrics" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.431498 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.435828 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c4pbl" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.436065 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.440275 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.442253 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.446017 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.446276 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.457940 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.489603 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547425 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tc8g\" (UniqueName: \"kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547589 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2kf\" (UniqueName: \"kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547712 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547772 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547810 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.547970 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650162 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2kf\" (UniqueName: \"kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650277 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650307 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650365 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650402 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650466 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.650487 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tc8g\" (UniqueName: \"kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.662663 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.663689 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.664398 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.666141 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.670708 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.671180 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tc8g\" (UniqueName: \"kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g\") pod \"nova-cell0-conductor-0\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.672694 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2kf\" (UniqueName: \"kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf\") pod \"kube-state-metrics-0\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " pod="openstack/kube-state-metrics-0" Jan 21 14:54:14 crc 
kubenswrapper[4834]: I0121 14:54:14.774825 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:14 crc kubenswrapper[4834]: I0121 14:54:14.791517 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:54:15 crc kubenswrapper[4834]: I0121 14:54:15.334749 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerStarted","Data":"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0"} Jan 21 14:54:15 crc kubenswrapper[4834]: I0121 14:54:15.344997 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:54:15 crc kubenswrapper[4834]: W0121 14:54:15.350168 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61306868_aa06_4574_a568_b36b22fd6db6.slice/crio-12be6275f460c004869fbe90cd5e4bac3e3aef36d48b64cd2132eea8be42726c WatchSource:0}: Error finding container 12be6275f460c004869fbe90cd5e4bac3e3aef36d48b64cd2132eea8be42726c: Status 404 returned error can't find the container with id 12be6275f460c004869fbe90cd5e4bac3e3aef36d48b64cd2132eea8be42726c Jan 21 14:54:15 crc kubenswrapper[4834]: W0121 14:54:15.417687 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3c79dc8_1d60_46f6_add1_1783486562f2.slice/crio-76a2efb1c84b29174bffc2517c61f40378ff43d07fa85b8a098fb0ae076396ba WatchSource:0}: Error finding container 76a2efb1c84b29174bffc2517c61f40378ff43d07fa85b8a098fb0ae076396ba: Status 404 returned error can't find the container with id 76a2efb1c84b29174bffc2517c61f40378ff43d07fa85b8a098fb0ae076396ba Jan 21 14:54:15 crc kubenswrapper[4834]: I0121 14:54:15.429865 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:54:15 crc kubenswrapper[4834]: I0121 14:54:15.790703 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.339610 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61736716-9721-48ac-9318-c2ceca59af62" path="/var/lib/kubelet/pods/61736716-9721-48ac-9318-c2ceca59af62/volumes" Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.349905 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c3c79dc8-1d60-46f6-add1-1783486562f2","Type":"ContainerStarted","Data":"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683"} Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.349998 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c3c79dc8-1d60-46f6-add1-1783486562f2","Type":"ContainerStarted","Data":"76a2efb1c84b29174bffc2517c61f40378ff43d07fa85b8a098fb0ae076396ba"} Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.350148 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.359305 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerStarted","Data":"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be"} Jan 21 14:54:16 crc kubenswrapper[4834]: 
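Two details in this stretch are easy to misread. The W-level "Failed to process watch event ... 404" entries are cAdvisor noticing a brand-new crio-<id> cgroup before the runtime can answer for it, a startup race that resolves on the next housekeeping pass. And probe="readiness" status="" is not a failure: the empty status means no probe result has been recorded yet, and it later flips to "ready" (at 14:54:24 below for kube-state-metrics-0). The condition that transition ultimately drives is PodReady; a small self-contained sketch:

```go
// Sketch: the PodReady condition that the readiness probe transitions in
// the log ultimately control.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false // no condition yet, like status="" above
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isReady(pod)) // false until the readiness probe passes
}
```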
I0121 14:54:16.362252 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61306868-aa06-4574-a568-b36b22fd6db6","Type":"ContainerStarted","Data":"537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f"} Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.362577 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61306868-aa06-4574-a568-b36b22fd6db6","Type":"ContainerStarted","Data":"12be6275f460c004869fbe90cd5e4bac3e3aef36d48b64cd2132eea8be42726c"} Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.363871 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.373422 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.9706115 podStartE2EDuration="2.373397778s" podCreationTimestamp="2026-01-21 14:54:14 +0000 UTC" firstStartedPulling="2026-01-21 14:54:15.439437065 +0000 UTC m=+1401.413786110" lastFinishedPulling="2026-01-21 14:54:15.842223343 +0000 UTC m=+1401.816572388" observedRunningTime="2026-01-21 14:54:16.371464968 +0000 UTC m=+1402.345814023" watchObservedRunningTime="2026-01-21 14:54:16.373397778 +0000 UTC m=+1402.347746823" Jan 21 14:54:16 crc kubenswrapper[4834]: I0121 14:54:16.409373 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.409334374 podStartE2EDuration="2.409334374s" podCreationTimestamp="2026-01-21 14:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:16.405454774 +0000 UTC m=+1402.379803819" watchObservedRunningTime="2026-01-21 14:54:16.409334374 +0000 UTC m=+1402.383683409" Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.376333 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerStarted","Data":"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72"} Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.378503 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="proxy-httpd" containerID="cri-o://780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72" gracePeriod=30 Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.378564 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="sg-core" containerID="cri-o://a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be" gracePeriod=30 Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.378561 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-notification-agent" containerID="cri-o://1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0" gracePeriod=30 Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.378525 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-central-agent" 
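The two "Observed pod startup duration" entries above encode a relationship worth knowing: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), and the nova-cell0-conductor-0 entry is consistent with that reading, since its pull timestamps are the zero time and SLO equals E2E. The m=+… suffixes are Go's monotonic clock readings. Checking the arithmetic with the kube-state-metrics-0 numbers copied from the log:

```go
// Sketch: verifying SLO = E2E - image pull window from the log's own
// numbers. Constants copied verbatim from the kube-state-metrics-0 entry.
package main

import "fmt"

func main() {
	const (
		e2e                 = 2.373397778   // podStartE2EDuration, seconds
		firstStartedPulling = 1401.413786110 // m=+ offset
		lastFinishedPulling = 1401.816572388 // m=+ offset
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pull)     // 0.402786278s
	fmt.Printf("SLO duration:      %.7fs\n", e2e-pull) // 1.9706115, as logged
}
```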
containerID="cri-o://0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76" gracePeriod=30 Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.422184 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.829070836 podStartE2EDuration="5.422139546s" podCreationTimestamp="2026-01-21 14:54:12 +0000 UTC" firstStartedPulling="2026-01-21 14:54:13.46756946 +0000 UTC m=+1399.441918505" lastFinishedPulling="2026-01-21 14:54:17.06063817 +0000 UTC m=+1403.034987215" observedRunningTime="2026-01-21 14:54:17.406085457 +0000 UTC m=+1403.380434542" watchObservedRunningTime="2026-01-21 14:54:17.422139546 +0000 UTC m=+1403.396488621" Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.738998 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.812012 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:17 crc kubenswrapper[4834]: I0121 14:54:17.991520 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:54:18 crc kubenswrapper[4834]: I0121 14:54:18.391849 4834 generic.go:334] "Generic (PLEG): container finished" podID="6f5031ac-2f93-45be-941a-98044d9b832a" containerID="a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be" exitCode=2 Jan 21 14:54:18 crc kubenswrapper[4834]: I0121 14:54:18.393706 4834 generic.go:334] "Generic (PLEG): container finished" podID="6f5031ac-2f93-45be-941a-98044d9b832a" containerID="1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0" exitCode=0 Jan 21 14:54:18 crc kubenswrapper[4834]: I0121 14:54:18.391944 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerDied","Data":"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be"} Jan 21 14:54:18 crc kubenswrapper[4834]: I0121 14:54:18.393913 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerDied","Data":"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0"} Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.403192 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brn6m" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="registry-server" containerID="cri-o://3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093" gracePeriod=2 Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.879430 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.973213 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities\") pod \"cab31199-772f-450d-8008-26b3a39443ed\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.973731 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content\") pod \"cab31199-772f-450d-8008-26b3a39443ed\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.973809 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrjrq\" (UniqueName: \"kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq\") pod \"cab31199-772f-450d-8008-26b3a39443ed\" (UID: \"cab31199-772f-450d-8008-26b3a39443ed\") " Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.980443 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities" (OuterVolumeSpecName: "utilities") pod "cab31199-772f-450d-8008-26b3a39443ed" (UID: "cab31199-772f-450d-8008-26b3a39443ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:19 crc kubenswrapper[4834]: I0121 14:54:19.982126 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq" (OuterVolumeSpecName: "kube-api-access-hrjrq") pod "cab31199-772f-450d-8008-26b3a39443ed" (UID: "cab31199-772f-450d-8008-26b3a39443ed"). InnerVolumeSpecName "kube-api-access-hrjrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.077310 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.077379 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrjrq\" (UniqueName: \"kubernetes.io/projected/cab31199-772f-450d-8008-26b3a39443ed-kube-api-access-hrjrq\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.122878 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cab31199-772f-450d-8008-26b3a39443ed" (UID: "cab31199-772f-450d-8008-26b3a39443ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.179675 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab31199-772f-450d-8008-26b3a39443ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.415874 4834 generic.go:334] "Generic (PLEG): container finished" podID="cab31199-772f-450d-8008-26b3a39443ed" containerID="3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093" exitCode=0 Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.416435 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerDied","Data":"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093"} Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.416497 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brn6m" event={"ID":"cab31199-772f-450d-8008-26b3a39443ed","Type":"ContainerDied","Data":"251bdf69667f447c8202c43b648e541d2f8f58ecf22d2d82dfd9119f8a66b8ec"} Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.416523 4834 scope.go:117] "RemoveContainer" containerID="3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.416741 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brn6m" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.454912 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.459795 4834 scope.go:117] "RemoveContainer" containerID="d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.467367 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brn6m"] Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.503845 4834 scope.go:117] "RemoveContainer" containerID="3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.529555 4834 scope.go:117] "RemoveContainer" containerID="3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093" Jan 21 14:54:20 crc kubenswrapper[4834]: E0121 14:54:20.530756 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093\": container with ID starting with 3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093 not found: ID does not exist" containerID="3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.530805 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093"} err="failed to get container status \"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093\": rpc error: code = NotFound desc = could not find container \"3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093\": container with ID starting with 3164a622cd98148d0d69e6b701d7888932f793f0e9060cea1cef6fd636eca093 not found: ID does not exist" Jan 21 14:54:20 crc 
kubenswrapper[4834]: I0121 14:54:20.530841 4834 scope.go:117] "RemoveContainer" containerID="d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b" Jan 21 14:54:20 crc kubenswrapper[4834]: E0121 14:54:20.531344 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b\": container with ID starting with d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b not found: ID does not exist" containerID="d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.531377 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b"} err="failed to get container status \"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b\": rpc error: code = NotFound desc = could not find container \"d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b\": container with ID starting with d335964873dd741be7dbdb14582009bc8ed1a3772e9107b1c49ad307bc977d6b not found: ID does not exist" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.531399 4834 scope.go:117] "RemoveContainer" containerID="3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190" Jan 21 14:54:20 crc kubenswrapper[4834]: E0121 14:54:20.531859 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190\": container with ID starting with 3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190 not found: ID does not exist" containerID="3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190" Jan 21 14:54:20 crc kubenswrapper[4834]: I0121 14:54:20.531898 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190"} err="failed to get container status \"3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190\": rpc error: code = NotFound desc = could not find container \"3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190\": container with ID starting with 3881eb93bdec3bac4d45298d36aaf8c57bfffd0eb095537ccf353911bfb74190 not found: ID does not exist" Jan 21 14:54:22 crc kubenswrapper[4834]: I0121 14:54:22.336105 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab31199-772f-450d-8008-26b3a39443ed" path="/var/lib/kubelet/pods/cab31199-772f-450d-8008-26b3a39443ed/volumes" Jan 21 14:54:23 crc kubenswrapper[4834]: I0121 14:54:23.455386 4834 generic.go:334] "Generic (PLEG): container finished" podID="6f5031ac-2f93-45be-941a-98044d9b832a" containerID="0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76" exitCode=0 Jan 21 14:54:23 crc kubenswrapper[4834]: I0121 14:54:23.455483 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerDied","Data":"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76"} Jan 21 14:54:24 crc kubenswrapper[4834]: I0121 14:54:24.803096 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 14:54:24 crc kubenswrapper[4834]: I0121 14:54:24.812495 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.304336 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8r9rw"] Jan 21 14:54:25 crc kubenswrapper[4834]: E0121 14:54:25.304913 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="extract-utilities" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.304954 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="extract-utilities" Jan 21 14:54:25 crc kubenswrapper[4834]: E0121 14:54:25.304981 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="extract-content" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.304988 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="extract-content" Jan 21 14:54:25 crc kubenswrapper[4834]: E0121 14:54:25.305011 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="registry-server" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.305019 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="registry-server" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.305219 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab31199-772f-450d-8008-26b3a39443ed" containerName="registry-server" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.305950 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.309728 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.310056 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.325590 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8r9rw"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.405746 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.405824 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.405861 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fg9p\" (UniqueName: \"kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 
14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.406517 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.508555 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.508664 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.508702 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.508725 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fg9p\" (UniqueName: \"kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.525443 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.526971 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.541562 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.548743 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fg9p\" (UniqueName: \"kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p\") pod \"nova-cell0-cell-mapping-8r9rw\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.586631 4834 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.617117 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.619372 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.620134 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.636452 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.640988 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.646162 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.718631 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.747055 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m89cs\" (UniqueName: \"kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765371 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765456 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bplz\" (UniqueName: \"kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765518 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765540 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765650 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 
14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.765718 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.819910 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.897680 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m89cs\" (UniqueName: \"kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.897909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.897967 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bplz\" (UniqueName: \"kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.898028 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.898058 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.898180 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.898251 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.902497 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.919633 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.928844 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.934138 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.937318 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bplz\" (UniqueName: \"kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz\") pod \"nova-metadata-0\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") " pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.945564 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.945673 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.947227 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.947830 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m89cs\" (UniqueName: \"kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs\") pod \"nova-scheduler-0\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.966876 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.970101 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.978011 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.980268 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.984358 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:25 crc kubenswrapper[4834]: I0121 14:54:25.988072 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.013678 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.024294 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.024428 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hgl2\" (UniqueName: \"kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.024491 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.028693 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.031727 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.046166 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.065384 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.135700 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136340 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136396 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqmfh\" (UniqueName: \"kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136441 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136489 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6755\" (UniqueName: \"kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136511 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136552 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hgl2\" (UniqueName: \"kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136572 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136601 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136621 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.136655 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.165193 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.165634 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.173666 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hgl2\" (UniqueName: \"kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2\") pod \"nova-cell1-novncproxy-0\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238261 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238338 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238384 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238411 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238438 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238479 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqmfh\" (UniqueName: \"kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238519 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: 
\"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238576 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6755\" (UniqueName: \"kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238599 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.238635 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.247907 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.248848 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.249436 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.250741 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.256770 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.265994 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.285526 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.287740 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.289654 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6755\" (UniqueName: \"kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755\") pod \"nova-api-0\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.296486 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqmfh\" (UniqueName: \"kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh\") pod \"dnsmasq-dns-647df7b8c5-vjvxl\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.322565 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.370978 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.465006 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:26 crc kubenswrapper[4834]: I0121 14:54:26.709627 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8r9rw"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.185011 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.203372 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.217314 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.235189 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jjlh"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.237016 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.242948 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.243214 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.261832 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jjlh"] Jan 21 14:54:27 crc kubenswrapper[4834]: W0121 14:54:27.311274 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda90468f0_bd39_4ba4_8e2f_78305f0a4f22.slice/crio-9222dccc5039697b99f7ff77d56490f35d06d457600130e0a0f8b417411ae993 WatchSource:0}: Error finding container 9222dccc5039697b99f7ff77d56490f35d06d457600130e0a0f8b417411ae993: Status 404 returned error can't find the container with id 9222dccc5039697b99f7ff77d56490f35d06d457600130e0a0f8b417411ae993 Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.313268 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:54:27 crc kubenswrapper[4834]: W0121 14:54:27.316083 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode35c4344_5bc3_4f2d_bc53_8db65570bb0e.slice/crio-00acea43ddf6c4a741b368a0ad08ad24889cbf08edc3cad0494b3d366c0f58df WatchSource:0}: Error finding container 00acea43ddf6c4a741b368a0ad08ad24889cbf08edc3cad0494b3d366c0f58df: Status 404 returned error can't find the container with id 00acea43ddf6c4a741b368a0ad08ad24889cbf08edc3cad0494b3d366c0f58df Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.321497 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.398481 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.399150 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.399203 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbtn\" (UniqueName: \"kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.399290 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts\") pod 
\"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.500605 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.500909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.501102 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnbtn\" (UniqueName: \"kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.501284 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.508756 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.509562 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.511695 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.524950 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerStarted","Data":"00acea43ddf6c4a741b368a0ad08ad24889cbf08edc3cad0494b3d366c0f58df"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.528260 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" event={"ID":"a90468f0-bd39-4ba4-8e2f-78305f0a4f22","Type":"ContainerStarted","Data":"9222dccc5039697b99f7ff77d56490f35d06d457600130e0a0f8b417411ae993"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 
14:54:27.531906 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2","Type":"ContainerStarted","Data":"24cb505f19e593fc7f03e890bda9b44b624bd3d0ea4d3fb8212dc6333521979f"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.532612 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnbtn\" (UniqueName: \"kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn\") pod \"nova-cell1-conductor-db-sync-9jjlh\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.537448 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fac02b09-9d5c-4da6-afc5-15216ebcddd6","Type":"ContainerStarted","Data":"39af6e999150af91308cccbd2d207f4b3e6e7e421c56e2c5b5eb23711b60903d"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.540624 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8r9rw" event={"ID":"6a339279-6805-4bb2-9f82-c2549a8a695f","Type":"ContainerStarted","Data":"d4fa94c6733544ea0cf3bc614fc78eb3be964784628806f3011d3ecf53e4fcf5"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.540726 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8r9rw" event={"ID":"6a339279-6805-4bb2-9f82-c2549a8a695f","Type":"ContainerStarted","Data":"f9e9bcbb8f02460eb9090d37ded1de2254b261a373f7e52514a43979586dcbfa"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.566707 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8r9rw" podStartSLOduration=2.566674462 podStartE2EDuration="2.566674462s" podCreationTimestamp="2026-01-21 14:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:27.564575257 +0000 UTC m=+1413.538924302" watchObservedRunningTime="2026-01-21 14:54:27.566674462 +0000 UTC m=+1413.541023507" Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.596459 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerStarted","Data":"f8e9f6a471f6971d601b071bd000f15bf28e95811cee300256556639fef632c6"} Jan 21 14:54:27 crc kubenswrapper[4834]: I0121 14:54:27.602893 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.176886 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jjlh"] Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.631604 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" event={"ID":"30331d52-9c56-49cd-8b97-0869941cad41","Type":"ContainerStarted","Data":"47a76e82bb6ec47433c8cfb185300918cf777eb2d63a773f8288426f63642a13"} Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.632082 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" event={"ID":"30331d52-9c56-49cd-8b97-0869941cad41","Type":"ContainerStarted","Data":"a2480fe0f7c77b45d979e44bb1fb73635b38a7710887ba6d04c7f4466542b5f9"} Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.637002 4834 generic.go:334] "Generic (PLEG): container finished" podID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerID="3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210" exitCode=0 Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.637819 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" event={"ID":"a90468f0-bd39-4ba4-8e2f-78305f0a4f22","Type":"ContainerDied","Data":"3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210"} Jan 21 14:54:28 crc kubenswrapper[4834]: I0121 14:54:28.660739 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" podStartSLOduration=1.660714296 podStartE2EDuration="1.660714296s" podCreationTimestamp="2026-01-21 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:28.655516305 +0000 UTC m=+1414.629865370" watchObservedRunningTime="2026-01-21 14:54:28.660714296 +0000 UTC m=+1414.635063341" Jan 21 14:54:29 crc kubenswrapper[4834]: I0121 14:54:29.958075 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:29 crc kubenswrapper[4834]: I0121 14:54:29.972808 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.685726 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fac02b09-9d5c-4da6-afc5-15216ebcddd6","Type":"ContainerStarted","Data":"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa"} Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.685878 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa" gracePeriod=30 Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.690118 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerStarted","Data":"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856"} Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.690162 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerStarted","Data":"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911"} Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.690312 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-log" containerID="cri-o://879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911" gracePeriod=30 Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.690439 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-metadata" containerID="cri-o://d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856" gracePeriod=30 Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.697794 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" event={"ID":"a90468f0-bd39-4ba4-8e2f-78305f0a4f22","Type":"ContainerStarted","Data":"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea"} Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.699221 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.703330 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2","Type":"ContainerStarted","Data":"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c"} Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.719586 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.170421981 podStartE2EDuration="6.719565536s" podCreationTimestamp="2026-01-21 14:54:25 +0000 UTC" firstStartedPulling="2026-01-21 14:54:27.217881271 +0000 UTC m=+1413.192230326" lastFinishedPulling="2026-01-21 14:54:30.767024836 +0000 UTC m=+1416.741373881" observedRunningTime="2026-01-21 14:54:31.709351469 +0000 UTC m=+1417.683700524" watchObservedRunningTime="2026-01-21 14:54:31.719565536 +0000 UTC m=+1417.693914581" Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.737946 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.192757614 podStartE2EDuration="6.737895175s" podCreationTimestamp="2026-01-21 14:54:25 +0000 UTC" firstStartedPulling="2026-01-21 14:54:27.231192384 +0000 UTC m=+1413.205541429" lastFinishedPulling="2026-01-21 14:54:30.776329945 +0000 UTC m=+1416.750678990" observedRunningTime="2026-01-21 14:54:31.734758348 +0000 UTC m=+1417.709107393" watchObservedRunningTime="2026-01-21 14:54:31.737895175 +0000 UTC m=+1417.712244230" Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.769233 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" podStartSLOduration=6.769208588 podStartE2EDuration="6.769208588s" podCreationTimestamp="2026-01-21 14:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:31.761379225 +0000 UTC m=+1417.735728270" watchObservedRunningTime="2026-01-21 14:54:31.769208588 +0000 UTC m=+1417.743557633" Jan 21 14:54:31 crc kubenswrapper[4834]: I0121 14:54:31.785324 4834 
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.281427 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.476905 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data\") pod \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") "
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.476972 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bplz\" (UniqueName: \"kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz\") pod \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") "
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.477382 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle\") pod \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") "
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.477541 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs\") pod \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\" (UID: \"048e5fdb-9f0c-4f83-868f-8114dd4e5c27\") "
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.477947 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs" (OuterVolumeSpecName: "logs") pod "048e5fdb-9f0c-4f83-868f-8114dd4e5c27" (UID: "048e5fdb-9f0c-4f83-868f-8114dd4e5c27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.478862 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-logs\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.482467 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz" (OuterVolumeSpecName: "kube-api-access-6bplz") pod "048e5fdb-9f0c-4f83-868f-8114dd4e5c27" (UID: "048e5fdb-9f0c-4f83-868f-8114dd4e5c27"). InnerVolumeSpecName "kube-api-access-6bplz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.504334 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data" (OuterVolumeSpecName: "config-data") pod "048e5fdb-9f0c-4f83-868f-8114dd4e5c27" (UID: "048e5fdb-9f0c-4f83-868f-8114dd4e5c27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.505903 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "048e5fdb-9f0c-4f83-868f-8114dd4e5c27" (UID: "048e5fdb-9f0c-4f83-868f-8114dd4e5c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.581489 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.581531 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bplz\" (UniqueName: \"kubernetes.io/projected/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-kube-api-access-6bplz\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.581543 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048e5fdb-9f0c-4f83-868f-8114dd4e5c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.718749 4834 generic.go:334] "Generic (PLEG): container finished" podID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerID="d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856" exitCode=0
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.718790 4834 generic.go:334] "Generic (PLEG): container finished" podID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerID="879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911" exitCode=143
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.718889 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.718900 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerDied","Data":"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856"} Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.719447 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerDied","Data":"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911"} Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.719472 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"048e5fdb-9f0c-4f83-868f-8114dd4e5c27","Type":"ContainerDied","Data":"f8e9f6a471f6971d601b071bd000f15bf28e95811cee300256556639fef632c6"} Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.719499 4834 scope.go:117] "RemoveContainer" containerID="d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.724840 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerStarted","Data":"5a33eb4d6c14070fa7e23543af5bac25b98e385fda092e799fe2d91490a275d9"} Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.724868 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerStarted","Data":"5574e96bff2edf688694452f1f437bf89a5dbecff5781934bb9cd5dcc5f0f3c5"} Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.758336 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.075494663 podStartE2EDuration="7.758304133s" podCreationTimestamp="2026-01-21 14:54:25 +0000 UTC" firstStartedPulling="2026-01-21 14:54:27.321730266 +0000 UTC m=+1413.296079311" lastFinishedPulling="2026-01-21 14:54:32.004539736 +0000 UTC m=+1417.978888781" observedRunningTime="2026-01-21 14:54:32.746807906 +0000 UTC m=+1418.721156951" watchObservedRunningTime="2026-01-21 14:54:32.758304133 +0000 UTC m=+1418.732653178" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.760773 4834 scope.go:117] "RemoveContainer" containerID="879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.825905 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.840012 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.853207 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:32 crc kubenswrapper[4834]: E0121 14:54:32.853909 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-log" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.853951 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-log" Jan 21 14:54:32 crc kubenswrapper[4834]: E0121 14:54:32.853999 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" 
containerName="nova-metadata-metadata" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.854009 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-metadata" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.854275 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-metadata" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.854299 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" containerName="nova-metadata-log" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.855621 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.866072 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.867483 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.867900 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.868945 4834 scope.go:117] "RemoveContainer" containerID="d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856" Jan 21 14:54:32 crc kubenswrapper[4834]: E0121 14:54:32.869369 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856\": container with ID starting with d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856 not found: ID does not exist" containerID="d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869396 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856"} err="failed to get container status \"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856\": rpc error: code = NotFound desc = could not find container \"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856\": container with ID starting with d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856 not found: ID does not exist" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869421 4834 scope.go:117] "RemoveContainer" containerID="879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911" Jan 21 14:54:32 crc kubenswrapper[4834]: E0121 14:54:32.869651 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911\": container with ID starting with 879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911 not found: ID does not exist" containerID="879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911" Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869679 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911"} err="failed to get container status \"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911\": rpc error: code 
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869698 4834 scope.go:117] "RemoveContainer" containerID="d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869963 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856"} err="failed to get container status \"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856\": rpc error: code = NotFound desc = could not find container \"d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856\": container with ID starting with d5694f9f9329718ab361d6690be7078e8ff2198c57455b9dd487b6b6e3cb1856 not found: ID does not exist"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.869991 4834 scope.go:117] "RemoveContainer" containerID="879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.870443 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911"} err="failed to get container status \"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911\": rpc error: code = NotFound desc = could not find container \"879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911\": container with ID starting with 879c67b4e90263033ebe3a3deaef33b4f56d5a2bb46c3e94f3454eaca2159911 not found: ID does not exist"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.893071 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.893149 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.893245 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.893293 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.893346 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77h2\" (UniqueName: \"kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.995445 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.995985 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.996011 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.996332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.996405 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:32 crc kubenswrapper[4834]: I0121 14:54:32.996506 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h77h2\" (UniqueName: \"kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.009885 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.012009 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.020883 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.025180 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h77h2\" (UniqueName: \"kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2\") pod \"nova-metadata-0\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.205666 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.710581 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 14:54:33 crc kubenswrapper[4834]: I0121 14:54:33.738551 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerStarted","Data":"604ce3b8c7005f6ebd969b06fab6d6e53a8365e7bbe14f0de07050228148d8b5"}
Jan 21 14:54:34 crc kubenswrapper[4834]: I0121 14:54:34.342479 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048e5fdb-9f0c-4f83-868f-8114dd4e5c27" path="/var/lib/kubelet/pods/048e5fdb-9f0c-4f83-868f-8114dd4e5c27/volumes"
Jan 21 14:54:34 crc kubenswrapper[4834]: I0121 14:54:34.756767 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerStarted","Data":"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214"}
Jan 21 14:54:34 crc kubenswrapper[4834]: I0121 14:54:34.756854 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerStarted","Data":"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316"}
Jan 21 14:54:34 crc kubenswrapper[4834]: I0121 14:54:34.778564 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.778541889 podStartE2EDuration="2.778541889s" podCreationTimestamp="2026-01-21 14:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:34.776859467 +0000 UTC m=+1420.751208512" watchObservedRunningTime="2026-01-21 14:54:34.778541889 +0000 UTC m=+1420.752890934"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.019305 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.021732 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.062248 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.324251 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.373080 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl"
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.464599 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"]
Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.465053 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="dnsmasq-dns" containerID="cri-o://2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92" gracePeriod=10
containerName="dnsmasq-dns" containerID="cri-o://2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92" gracePeriod=10 Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.467315 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.467367 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.820217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" event={"ID":"c644157e-579c-4348-bc28-fd5a273dfb02","Type":"ContainerDied","Data":"2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92"} Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.821968 4834 generic.go:334] "Generic (PLEG): container finished" podID="c644157e-579c-4348-bc28-fd5a273dfb02" containerID="2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92" exitCode=0 Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.825485 4834 generic.go:334] "Generic (PLEG): container finished" podID="6a339279-6805-4bb2-9f82-c2549a8a695f" containerID="d4fa94c6733544ea0cf3bc614fc78eb3be964784628806f3011d3ecf53e4fcf5" exitCode=0 Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.826021 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8r9rw" event={"ID":"6a339279-6805-4bb2-9f82-c2549a8a695f","Type":"ContainerDied","Data":"d4fa94c6733544ea0cf3bc614fc78eb3be964784628806f3011d3ecf53e4fcf5"} Jan 21 14:54:36 crc kubenswrapper[4834]: I0121 14:54:36.884735 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.218643 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358397 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358524 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358662 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358750 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358852 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.358907 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2jr\" (UniqueName: \"kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr\") pod \"c644157e-579c-4348-bc28-fd5a273dfb02\" (UID: \"c644157e-579c-4348-bc28-fd5a273dfb02\") " Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.374135 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr" (OuterVolumeSpecName: "kube-api-access-cj2jr") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "kube-api-access-cj2jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.436704 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.462732 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.462964 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.463003 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.463015 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj2jr\" (UniqueName: \"kubernetes.io/projected/c644157e-579c-4348-bc28-fd5a273dfb02-kube-api-access-cj2jr\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.469559 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.469666 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.471812 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config" (OuterVolumeSpecName: "config") pod "c644157e-579c-4348-bc28-fd5a273dfb02" (UID: "c644157e-579c-4348-bc28-fd5a273dfb02"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.551212 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.551306 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.566188 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.566248 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.566264 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c644157e-579c-4348-bc28-fd5a273dfb02-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.845515 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" event={"ID":"c644157e-579c-4348-bc28-fd5a273dfb02","Type":"ContainerDied","Data":"b6b43574cc5d2868a3bb56e2c6778bff68ea0e86f068f673d7dbec0cb2615052"} Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.845652 4834 scope.go:117] "RemoveContainer" containerID="2010e57c123057da82a035341d5a54788d0f01f74d18493105f4fc85acbfdd92" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.845999 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-5qxxk" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.858196 4834 generic.go:334] "Generic (PLEG): container finished" podID="30331d52-9c56-49cd-8b97-0869941cad41" containerID="47a76e82bb6ec47433c8cfb185300918cf777eb2d63a773f8288426f63642a13" exitCode=0 Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.859476 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" event={"ID":"30331d52-9c56-49cd-8b97-0869941cad41","Type":"ContainerDied","Data":"47a76e82bb6ec47433c8cfb185300918cf777eb2d63a773f8288426f63642a13"} Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.902711 4834 scope.go:117] "RemoveContainer" containerID="ef3d2bfc377dd1ff0e431c35ee333e5a95afbaa03c3d40056e31da206ecdd416" Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.942881 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"] Jan 21 14:54:37 crc kubenswrapper[4834]: I0121 14:54:37.954576 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-5qxxk"] Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.206038 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.209041 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.343052 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" path="/var/lib/kubelet/pods/c644157e-579c-4348-bc28-fd5a273dfb02/volumes" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.356294 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.487262 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts\") pod \"6a339279-6805-4bb2-9f82-c2549a8a695f\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.487548 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle\") pod \"6a339279-6805-4bb2-9f82-c2549a8a695f\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.487636 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fg9p\" (UniqueName: \"kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p\") pod \"6a339279-6805-4bb2-9f82-c2549a8a695f\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.487729 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data\") pod \"6a339279-6805-4bb2-9f82-c2549a8a695f\" (UID: \"6a339279-6805-4bb2-9f82-c2549a8a695f\") " Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.507309 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts" (OuterVolumeSpecName: "scripts") pod "6a339279-6805-4bb2-9f82-c2549a8a695f" (UID: "6a339279-6805-4bb2-9f82-c2549a8a695f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.507506 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p" (OuterVolumeSpecName: "kube-api-access-4fg9p") pod "6a339279-6805-4bb2-9f82-c2549a8a695f" (UID: "6a339279-6805-4bb2-9f82-c2549a8a695f"). InnerVolumeSpecName "kube-api-access-4fg9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.527143 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a339279-6805-4bb2-9f82-c2549a8a695f" (UID: "6a339279-6805-4bb2-9f82-c2549a8a695f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.538847 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data" (OuterVolumeSpecName: "config-data") pod "6a339279-6805-4bb2-9f82-c2549a8a695f" (UID: "6a339279-6805-4bb2-9f82-c2549a8a695f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.595088 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.595138 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.595151 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a339279-6805-4bb2-9f82-c2549a8a695f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.595166 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fg9p\" (UniqueName: \"kubernetes.io/projected/6a339279-6805-4bb2-9f82-c2549a8a695f-kube-api-access-4fg9p\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.873394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8r9rw" event={"ID":"6a339279-6805-4bb2-9f82-c2549a8a695f","Type":"ContainerDied","Data":"f9e9bcbb8f02460eb9090d37ded1de2254b261a373f7e52514a43979586dcbfa"} Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.873964 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e9bcbb8f02460eb9090d37ded1de2254b261a373f7e52514a43979586dcbfa" Jan 21 14:54:38 crc kubenswrapper[4834]: I0121 14:54:38.873503 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8r9rw" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.149069 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.149458 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-log" containerID="cri-o://5574e96bff2edf688694452f1f437bf89a5dbecff5781934bb9cd5dcc5f0f3c5" gracePeriod=30 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.150125 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-api" containerID="cri-o://5a33eb4d6c14070fa7e23543af5bac25b98e385fda092e799fe2d91490a275d9" gracePeriod=30 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.198220 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.198615 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerName="nova-scheduler-scheduler" containerID="cri-o://d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" gracePeriod=30 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.218886 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.354303 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.518730 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data\") pod \"30331d52-9c56-49cd-8b97-0869941cad41\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.519622 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnbtn\" (UniqueName: \"kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn\") pod \"30331d52-9c56-49cd-8b97-0869941cad41\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.519860 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts\") pod \"30331d52-9c56-49cd-8b97-0869941cad41\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.523673 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle\") pod \"30331d52-9c56-49cd-8b97-0869941cad41\" (UID: \"30331d52-9c56-49cd-8b97-0869941cad41\") " Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.531286 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts" (OuterVolumeSpecName: "scripts") pod "30331d52-9c56-49cd-8b97-0869941cad41" (UID: "30331d52-9c56-49cd-8b97-0869941cad41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.546675 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn" (OuterVolumeSpecName: "kube-api-access-rnbtn") pod "30331d52-9c56-49cd-8b97-0869941cad41" (UID: "30331d52-9c56-49cd-8b97-0869941cad41"). InnerVolumeSpecName "kube-api-access-rnbtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.561191 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data" (OuterVolumeSpecName: "config-data") pod "30331d52-9c56-49cd-8b97-0869941cad41" (UID: "30331d52-9c56-49cd-8b97-0869941cad41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.569625 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30331d52-9c56-49cd-8b97-0869941cad41" (UID: "30331d52-9c56-49cd-8b97-0869941cad41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.628458 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.628503 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.628515 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30331d52-9c56-49cd-8b97-0869941cad41-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.628527 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnbtn\" (UniqueName: \"kubernetes.io/projected/30331d52-9c56-49cd-8b97-0869941cad41-kube-api-access-rnbtn\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.894845 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" event={"ID":"30331d52-9c56-49cd-8b97-0869941cad41","Type":"ContainerDied","Data":"a2480fe0f7c77b45d979e44bb1fb73635b38a7710887ba6d04c7f4466542b5f9"} Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.894876 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jjlh" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.894909 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2480fe0f7c77b45d979e44bb1fb73635b38a7710887ba6d04c7f4466542b5f9" Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.902187 4834 generic.go:334] "Generic (PLEG): container finished" podID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerID="5574e96bff2edf688694452f1f437bf89a5dbecff5781934bb9cd5dcc5f0f3c5" exitCode=143 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.902472 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-log" containerID="cri-o://5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" gracePeriod=30 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.902784 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerDied","Data":"5574e96bff2edf688694452f1f437bf89a5dbecff5781934bb9cd5dcc5f0f3c5"} Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.903190 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-metadata" containerID="cri-o://6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" gracePeriod=30 Jan 21 14:54:39 crc kubenswrapper[4834]: I0121 14:54:39.999817 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:54:40 crc kubenswrapper[4834]: E0121 14:54:40.000554 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a339279-6805-4bb2-9f82-c2549a8a695f" containerName="nova-manage" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000581 4834 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6a339279-6805-4bb2-9f82-c2549a8a695f" containerName="nova-manage" Jan 21 14:54:40 crc kubenswrapper[4834]: E0121 14:54:40.000594 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="dnsmasq-dns" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000607 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="dnsmasq-dns" Jan 21 14:54:40 crc kubenswrapper[4834]: E0121 14:54:40.000625 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30331d52-9c56-49cd-8b97-0869941cad41" containerName="nova-cell1-conductor-db-sync" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000636 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="30331d52-9c56-49cd-8b97-0869941cad41" containerName="nova-cell1-conductor-db-sync" Jan 21 14:54:40 crc kubenswrapper[4834]: E0121 14:54:40.000664 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="init" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000672 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="init" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000953 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="30331d52-9c56-49cd-8b97-0869941cad41" containerName="nova-cell1-conductor-db-sync" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000973 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c644157e-579c-4348-bc28-fd5a273dfb02" containerName="dnsmasq-dns" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.000994 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a339279-6805-4bb2-9f82-c2549a8a695f" containerName="nova-manage" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.002078 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.014384 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.021131 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.040916 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.041056 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.041096 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzw6\" (UniqueName: \"kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.143466 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.143578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.143612 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzw6\" (UniqueName: \"kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.151208 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.152225 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.167855 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzw6\" (UniqueName: \"kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6\") pod \"nova-cell1-conductor-0\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.382265 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.467427 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.561914 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h77h2\" (UniqueName: \"kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2\") pod \"85ed970c-d677-4380-8c18-98d663d31f0f\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.562084 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs\") pod \"85ed970c-d677-4380-8c18-98d663d31f0f\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.562180 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle\") pod \"85ed970c-d677-4380-8c18-98d663d31f0f\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.562806 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs" (OuterVolumeSpecName: "logs") pod "85ed970c-d677-4380-8c18-98d663d31f0f" (UID: "85ed970c-d677-4380-8c18-98d663d31f0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.563109 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs\") pod \"85ed970c-d677-4380-8c18-98d663d31f0f\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.563358 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data\") pod \"85ed970c-d677-4380-8c18-98d663d31f0f\" (UID: \"85ed970c-d677-4380-8c18-98d663d31f0f\") " Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.564442 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85ed970c-d677-4380-8c18-98d663d31f0f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.572348 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2" (OuterVolumeSpecName: "kube-api-access-h77h2") pod "85ed970c-d677-4380-8c18-98d663d31f0f" (UID: "85ed970c-d677-4380-8c18-98d663d31f0f"). InnerVolumeSpecName "kube-api-access-h77h2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.599170 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85ed970c-d677-4380-8c18-98d663d31f0f" (UID: "85ed970c-d677-4380-8c18-98d663d31f0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.610513 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data" (OuterVolumeSpecName: "config-data") pod "85ed970c-d677-4380-8c18-98d663d31f0f" (UID: "85ed970c-d677-4380-8c18-98d663d31f0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.650846 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "85ed970c-d677-4380-8c18-98d663d31f0f" (UID: "85ed970c-d677-4380-8c18-98d663d31f0f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.666905 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h77h2\" (UniqueName: \"kubernetes.io/projected/85ed970c-d677-4380-8c18-98d663d31f0f-kube-api-access-h77h2\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.666967 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.666978 4834 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.666998 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85ed970c-d677-4380-8c18-98d663d31f0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.904646 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925779 4834 generic.go:334] "Generic (PLEG): container finished" podID="85ed970c-d677-4380-8c18-98d663d31f0f" containerID="6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" exitCode=0 Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925819 4834 generic.go:334] "Generic (PLEG): container finished" podID="85ed970c-d677-4380-8c18-98d663d31f0f" containerID="5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" exitCode=143 Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925844 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerDied","Data":"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214"} Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925937 4834 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerDied","Data":"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316"} Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925954 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85ed970c-d677-4380-8c18-98d663d31f0f","Type":"ContainerDied","Data":"604ce3b8c7005f6ebd969b06fab6d6e53a8365e7bbe14f0de07050228148d8b5"} Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.925955 4834 scope.go:117] "RemoveContainer" containerID="6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" Jan 21 14:54:40 crc kubenswrapper[4834]: I0121 14:54:40.926420 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.020320 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.022206 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.023885 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.024026 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerName="nova-scheduler-scheduler" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.071134 4834 scope.go:117] "RemoveContainer" containerID="5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.082712 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.097489 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.111511 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.112275 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-log" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.112307 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-log" Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.112348 4834 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-metadata" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.112358 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-metadata" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.112594 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-log" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.112618 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" containerName="nova-metadata-metadata" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.115761 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.121520 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.124054 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.136347 4834 scope.go:117] "RemoveContainer" containerID="6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.137022 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214\": container with ID starting with 6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214 not found: ID does not exist" containerID="6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.137064 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214"} err="failed to get container status \"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214\": rpc error: code = NotFound desc = could not find container \"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214\": container with ID starting with 6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214 not found: ID does not exist" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.137097 4834 scope.go:117] "RemoveContainer" containerID="5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" Jan 21 14:54:41 crc kubenswrapper[4834]: E0121 14:54:41.139410 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316\": container with ID starting with 5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316 not found: ID does not exist" containerID="5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.139497 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316"} err="failed to get container status \"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316\": rpc error: code = NotFound desc = could not find container 
\"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316\": container with ID starting with 5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316 not found: ID does not exist" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.139543 4834 scope.go:117] "RemoveContainer" containerID="6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.142002 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.143708 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214"} err="failed to get container status \"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214\": rpc error: code = NotFound desc = could not find container \"6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214\": container with ID starting with 6f5cc918948e0e48bcd6797e5b06dbb01aa1fdf69611f07d75fe0ecd9f2c4214 not found: ID does not exist" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.143784 4834 scope.go:117] "RemoveContainer" containerID="5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.145306 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316"} err="failed to get container status \"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316\": rpc error: code = NotFound desc = could not find container \"5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316\": container with ID starting with 5ff6d2c3246632c4c13cfe49cbf62fb1330019058def7b68bbebacdf3c230316 not found: ID does not exist" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.179118 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.179708 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qpr\" (UniqueName: \"kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.180055 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.180124 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.180284 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.283997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.284153 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.284207 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.284343 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.284429 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qpr\" (UniqueName: \"kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.284707 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.289476 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.289565 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.289873 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.307948 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9qpr\" (UniqueName: \"kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr\") pod \"nova-metadata-0\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.445457 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.914483 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.953270 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerStarted","Data":"c8b5185e3bc8f0d42c1dbc320c5b8375560ac15b34732f442797e32ce287fb93"} Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.955247 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"07ff4f13-b754-4f82-accc-54ed420dce2e","Type":"ContainerStarted","Data":"9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73"} Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.955303 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"07ff4f13-b754-4f82-accc-54ed420dce2e","Type":"ContainerStarted","Data":"8d8a63311e10d405f56c84fccafd5297ec905bcba33f374cf845ccc3f75e5d45"} Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.957112 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:41 crc kubenswrapper[4834]: I0121 14:54:41.989960 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.989905265 podStartE2EDuration="2.989905265s" podCreationTimestamp="2026-01-21 14:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:41.980441581 +0000 UTC m=+1427.954790626" watchObservedRunningTime="2026-01-21 14:54:41.989905265 +0000 UTC m=+1427.964254310" Jan 21 14:54:42 crc kubenswrapper[4834]: I0121 14:54:42.338685 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ed970c-d677-4380-8c18-98d663d31f0f" path="/var/lib/kubelet/pods/85ed970c-d677-4380-8c18-98d663d31f0f/volumes" Jan 21 14:54:42 crc kubenswrapper[4834]: I0121 14:54:42.967425 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:54:42 crc kubenswrapper[4834]: I0121 14:54:42.971212 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 14:54:42 crc kubenswrapper[4834]: I0121 14:54:42.975876 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerStarted","Data":"84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf"} Jan 21 14:54:42 crc kubenswrapper[4834]: I0121 14:54:42.975976 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerStarted","Data":"8548e9c3f056e8fa0ba7a51bbdd28745fa945e40cc8f2673cf28cb7d9c2a7e43"} Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.014009 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.013972127 podStartE2EDuration="2.013972127s" podCreationTimestamp="2026-01-21 14:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:43.00316986 +0000 UTC m=+1428.977518915" watchObservedRunningTime="2026-01-21 14:54:43.013972127 +0000 UTC m=+1428.988321172" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.792038 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.882159 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m89cs\" (UniqueName: \"kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs\") pod \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.882279 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data\") pod \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.882543 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle\") pod \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\" (UID: \"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2\") " Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.892814 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs" (OuterVolumeSpecName: "kube-api-access-m89cs") pod "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" (UID: "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2"). InnerVolumeSpecName "kube-api-access-m89cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.938443 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data" (OuterVolumeSpecName: "config-data") pod "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" (UID: "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.942072 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" (UID: "7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.985982 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m89cs\" (UniqueName: \"kubernetes.io/projected/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-kube-api-access-m89cs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.986353 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:43 crc kubenswrapper[4834]: I0121 14:54:43.986365 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.024371 4834 generic.go:334] "Generic (PLEG): container finished" podID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" exitCode=0 Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.024494 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.024517 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2","Type":"ContainerDied","Data":"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c"} Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.024562 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2","Type":"ContainerDied","Data":"24cb505f19e593fc7f03e890bda9b44b624bd3d0ea4d3fb8212dc6333521979f"} Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.024641 4834 scope.go:117] "RemoveContainer" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.033588 4834 generic.go:334] "Generic (PLEG): container finished" podID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerID="5a33eb4d6c14070fa7e23543af5bac25b98e385fda092e799fe2d91490a275d9" exitCode=0 Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.033656 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerDied","Data":"5a33eb4d6c14070fa7e23543af5bac25b98e385fda092e799fe2d91490a275d9"} Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.082793 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.087161 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.090463 4834 scope.go:117] "RemoveContainer" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" Jan 21 14:54:44 crc kubenswrapper[4834]: E0121 14:54:44.092563 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c\": container with ID starting with d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c not found: ID does not exist" containerID="d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.092603 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c"} err="failed to get container status \"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c\": rpc error: code = NotFound desc = could not find container \"d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c\": container with ID starting with d24ac3bfedc0a95491a88e0200f05ef0d35274828ca0287bf905fed95c3c233c not found: ID does not exist" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.097515 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.177511 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:44 crc kubenswrapper[4834]: E0121 14:54:44.178246 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-api" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178276 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-api" Jan 21 14:54:44 crc kubenswrapper[4834]: E0121 14:54:44.178303 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerName="nova-scheduler-scheduler" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178313 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerName="nova-scheduler-scheduler" Jan 21 14:54:44 crc kubenswrapper[4834]: E0121 14:54:44.178330 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-log" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178390 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-log" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178697 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-api" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178726 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" containerName="nova-api-log" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.178736 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" containerName="nova-scheduler-scheduler" Jan 
21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.179855 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.183489 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.193234 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle\") pod \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.193428 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6755\" (UniqueName: \"kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755\") pod \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.193564 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs\") pod \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.196375 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data\") pod \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\" (UID: \"e35c4344-5bc3-4f2d-bc53-8db65570bb0e\") " Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.196582 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs" (OuterVolumeSpecName: "logs") pod "e35c4344-5bc3-4f2d-bc53-8db65570bb0e" (UID: "e35c4344-5bc3-4f2d-bc53-8db65570bb0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.199600 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.202530 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.206999 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755" (OuterVolumeSpecName: "kube-api-access-r6755") pod "e35c4344-5bc3-4f2d-bc53-8db65570bb0e" (UID: "e35c4344-5bc3-4f2d-bc53-8db65570bb0e"). InnerVolumeSpecName "kube-api-access-r6755". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.229871 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e35c4344-5bc3-4f2d-bc53-8db65570bb0e" (UID: "e35c4344-5bc3-4f2d-bc53-8db65570bb0e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.233183 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data" (OuterVolumeSpecName: "config-data") pod "e35c4344-5bc3-4f2d-bc53-8db65570bb0e" (UID: "e35c4344-5bc3-4f2d-bc53-8db65570bb0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302393 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302497 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302544 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knfq\" (UniqueName: \"kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302718 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6755\" (UniqueName: \"kubernetes.io/projected/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-kube-api-access-r6755\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302730 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.302740 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e35c4344-5bc3-4f2d-bc53-8db65570bb0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.341277 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2" path="/var/lib/kubelet/pods/7cfcb7ee-00e3-4083-b367-9d6f2ea2dfd2/volumes" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.405773 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.405953 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5knfq\" (UniqueName: \"kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.406283 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.413783 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.415710 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.426542 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5knfq\" (UniqueName: \"kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq\") pod \"nova-scheduler-0\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.511373 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:54:44 crc kubenswrapper[4834]: I0121 14:54:44.833433 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:54:44 crc kubenswrapper[4834]: W0121 14:54:44.837568 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d641d5_b37a_49c7_9c13_0f9edfc6fe3e.slice/crio-2f118b56f69baad63a17e8e4c483235717c9765ca05a36320ee43f6800307c49 WatchSource:0}: Error finding container 2f118b56f69baad63a17e8e4c483235717c9765ca05a36320ee43f6800307c49: Status 404 returned error can't find the container with id 2f118b56f69baad63a17e8e4c483235717c9765ca05a36320ee43f6800307c49 Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.051088 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e35c4344-5bc3-4f2d-bc53-8db65570bb0e","Type":"ContainerDied","Data":"00acea43ddf6c4a741b368a0ad08ad24889cbf08edc3cad0494b3d366c0f58df"} Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.051162 4834 scope.go:117] "RemoveContainer" containerID="5a33eb4d6c14070fa7e23543af5bac25b98e385fda092e799fe2d91490a275d9" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.051339 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.057485 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e","Type":"ContainerStarted","Data":"2f118b56f69baad63a17e8e4c483235717c9765ca05a36320ee43f6800307c49"} Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.087384 4834 scope.go:117] "RemoveContainer" containerID="5574e96bff2edf688694452f1f437bf89a5dbecff5781934bb9cd5dcc5f0f3c5" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.093898 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.129686 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.160214 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.164597 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.167999 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.184271 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.244583 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.244671 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkd2q\" (UniqueName: \"kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.244702 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.244742 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.347152 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.348041 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkd2q\" (UniqueName: \"kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q\") 
pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.348083 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.348141 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.349069 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.354027 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.354033 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.369823 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkd2q\" (UniqueName: \"kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q\") pod \"nova-api-0\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") " pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.488860 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:54:45 crc kubenswrapper[4834]: I0121 14:54:45.963129 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.082703 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerStarted","Data":"8306d7b2efd620a5c2f65aeca8e0ef96d20e5d12be67fa8038ad21369ee58664"} Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.087509 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e","Type":"ContainerStarted","Data":"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c"} Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.123510 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.123480302 podStartE2EDuration="2.123480302s" podCreationTimestamp="2026-01-21 14:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:46.108240897 +0000 UTC m=+1432.082589952" watchObservedRunningTime="2026-01-21 14:54:46.123480302 +0000 UTC m=+1432.097829347" Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.339919 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35c4344-5bc3-4f2d-bc53-8db65570bb0e" path="/var/lib/kubelet/pods/e35c4344-5bc3-4f2d-bc53-8db65570bb0e/volumes" Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.445632 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:54:46 crc kubenswrapper[4834]: I0121 14:54:46.446054 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:54:47 crc kubenswrapper[4834]: I0121 14:54:47.106164 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerStarted","Data":"ce0e409f77afad20522b2d5c5697982357ff82fc8d2ba9ccaea1b91f271aeb6d"} Jan 21 14:54:47 crc kubenswrapper[4834]: I0121 14:54:47.106248 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerStarted","Data":"34385e391fa72b3c6ed74863fb03ed5794c3bb5a50fec4540af17052fcbe131b"} Jan 21 14:54:47 crc kubenswrapper[4834]: I0121 14:54:47.133830 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.133802245 podStartE2EDuration="2.133802245s" podCreationTimestamp="2026-01-21 14:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:54:47.126619812 +0000 UTC m=+1433.100968857" watchObservedRunningTime="2026-01-21 14:54:47.133802245 +0000 UTC m=+1433.108151290" Jan 21 14:54:47 crc kubenswrapper[4834]: I0121 14:54:47.863138 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.031709 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032529 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032564 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4ljh\" (UniqueName: \"kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032668 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032747 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032837 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032857 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.032882 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml\") pod \"6f5031ac-2f93-45be-941a-98044d9b832a\" (UID: \"6f5031ac-2f93-45be-941a-98044d9b832a\") " Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.033751 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.034201 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.040777 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh" (OuterVolumeSpecName: "kube-api-access-g4ljh") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "kube-api-access-g4ljh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.040820 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts" (OuterVolumeSpecName: "scripts") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.080970 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.122622 4834 generic.go:334] "Generic (PLEG): container finished" podID="6f5031ac-2f93-45be-941a-98044d9b832a" containerID="780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72" exitCode=137 Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.122730 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerDied","Data":"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72"} Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.122811 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6f5031ac-2f93-45be-941a-98044d9b832a","Type":"ContainerDied","Data":"3aca24d077974c60162292f12b5b1bfdce2cab68e633c56999c4b78c5356646c"} Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.122841 4834 scope.go:117] "RemoveContainer" containerID="780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.122750 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.130561 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.136815 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.136865 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4ljh\" (UniqueName: \"kubernetes.io/projected/6f5031ac-2f93-45be-941a-98044d9b832a-kube-api-access-g4ljh\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.136883 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.136896 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6f5031ac-2f93-45be-941a-98044d9b832a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.136908 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.168889 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data" (OuterVolumeSpecName: "config-data") pod "6f5031ac-2f93-45be-941a-98044d9b832a" (UID: "6f5031ac-2f93-45be-941a-98044d9b832a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.216600 4834 scope.go:117] "RemoveContainer" containerID="a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.239830 4834 scope.go:117] "RemoveContainer" containerID="1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.240260 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5031ac-2f93-45be-941a-98044d9b832a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.275207 4834 scope.go:117] "RemoveContainer" containerID="0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.305338 4834 scope.go:117] "RemoveContainer" containerID="780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.306066 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72\": container with ID starting with 780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72 not found: ID does not exist" containerID="780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.306138 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72"} err="failed to get container status 
\"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72\": rpc error: code = NotFound desc = could not find container \"780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72\": container with ID starting with 780137b6fb00f4aa9ae542ac2a47b8d8c80f914089d2d2a8abf9ae31b2e74a72 not found: ID does not exist" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.306186 4834 scope.go:117] "RemoveContainer" containerID="a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.306977 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be\": container with ID starting with a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be not found: ID does not exist" containerID="a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.307027 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be"} err="failed to get container status \"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be\": rpc error: code = NotFound desc = could not find container \"a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be\": container with ID starting with a72905fa6ba5c28d50b37863f7afcc10e3cc6ae421f5159600a5cb2e7d5d08be not found: ID does not exist" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.307062 4834 scope.go:117] "RemoveContainer" containerID="1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.307452 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0\": container with ID starting with 1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0 not found: ID does not exist" containerID="1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.307485 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0"} err="failed to get container status \"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0\": rpc error: code = NotFound desc = could not find container \"1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0\": container with ID starting with 1c777f28491688e932185dced1a011c660b10729ef635b8eeb69351b93413ef0 not found: ID does not exist" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.307506 4834 scope.go:117] "RemoveContainer" containerID="0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.308121 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76\": container with ID starting with 0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76 not found: ID does not exist" containerID="0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.308153 4834 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76"} err="failed to get container status \"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76\": rpc error: code = NotFound desc = could not find container \"0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76\": container with ID starting with 0e35bf293a3959cdc923e96afb039e80a35f52eae4756803a4e6e41fc1918f76 not found: ID does not exist" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.488617 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.507593 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528009 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.528529 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="proxy-httpd" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528549 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="proxy-httpd" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.528568 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="sg-core" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528578 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="sg-core" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.528590 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-central-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528598 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-central-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: E0121 14:54:48.528620 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-notification-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528627 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-notification-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528855 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-central-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528880 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="ceilometer-notification-agent" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528895 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="proxy-httpd" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.528907 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" containerName="sg-core" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.530876 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.533199 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.533903 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.534055 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.539391 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648115 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648177 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648250 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648275 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648292 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648327 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648394 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.648501 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tsv\" (UniqueName: 
\"kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.757621 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tsv\" (UniqueName: \"kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.757858 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.757964 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758120 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758180 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758216 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758305 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758412 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.758723 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.759859 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.764964 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.769962 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.770152 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.770816 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.770898 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.782947 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tsv\" (UniqueName: \"kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv\") pod \"ceilometer-0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") " pod="openstack/ceilometer-0" Jan 21 14:54:48 crc kubenswrapper[4834]: I0121 14:54:48.863821 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:54:49 crc kubenswrapper[4834]: I0121 14:54:49.448464 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:54:49 crc kubenswrapper[4834]: W0121 14:54:49.451838 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa044e90_bf24_4e12_99bd_e5e69325dbc0.slice/crio-c17e12bc2deffccb715ad33a9da3cf7a15e59ca1ae78fb8950c18b7629205507 WatchSource:0}: Error finding container c17e12bc2deffccb715ad33a9da3cf7a15e59ca1ae78fb8950c18b7629205507: Status 404 returned error can't find the container with id c17e12bc2deffccb715ad33a9da3cf7a15e59ca1ae78fb8950c18b7629205507 Jan 21 14:54:49 crc kubenswrapper[4834]: I0121 14:54:49.511661 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:54:50 crc kubenswrapper[4834]: I0121 14:54:50.155412 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerStarted","Data":"c17e12bc2deffccb715ad33a9da3cf7a15e59ca1ae78fb8950c18b7629205507"} Jan 21 14:54:50 crc kubenswrapper[4834]: I0121 14:54:50.339826 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5031ac-2f93-45be-941a-98044d9b832a" path="/var/lib/kubelet/pods/6f5031ac-2f93-45be-941a-98044d9b832a/volumes" Jan 21 14:54:50 crc kubenswrapper[4834]: I0121 14:54:50.419682 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 14:54:51 crc kubenswrapper[4834]: I0121 14:54:51.445751 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:54:51 crc kubenswrapper[4834]: I0121 14:54:51.446336 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:54:52 crc kubenswrapper[4834]: I0121 14:54:52.178858 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerStarted","Data":"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac"} Jan 21 14:54:52 crc kubenswrapper[4834]: I0121 14:54:52.463280 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:54:52 crc kubenswrapper[4834]: I0121 14:54:52.463311 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:54:53 crc kubenswrapper[4834]: I0121 14:54:53.194550 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerStarted","Data":"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab"} Jan 21 14:54:54 crc kubenswrapper[4834]: I0121 14:54:54.206758 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerStarted","Data":"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5"} Jan 21 14:54:54 crc kubenswrapper[4834]: I0121 14:54:54.512316 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:54:54 crc kubenswrapper[4834]: I0121 14:54:54.545111 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.222992 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerStarted","Data":"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d"} Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.223516 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.259009 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.269118 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.061745437 podStartE2EDuration="7.269087781s" podCreationTimestamp="2026-01-21 14:54:48 +0000 UTC" firstStartedPulling="2026-01-21 14:54:49.45475564 +0000 UTC m=+1435.429104685" lastFinishedPulling="2026-01-21 14:54:54.662097984 +0000 UTC m=+1440.636447029" observedRunningTime="2026-01-21 14:54:55.257029405 +0000 UTC m=+1441.231378460" watchObservedRunningTime="2026-01-21 14:54:55.269087781 +0000 UTC m=+1441.243436826" Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.489201 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:54:55 crc kubenswrapper[4834]: I0121 14:54:55.489251 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:54:56 crc kubenswrapper[4834]: I0121 14:54:56.573252 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:54:56 crc kubenswrapper[4834]: I0121 14:54:56.573285 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:01 crc kubenswrapper[4834]: I0121 14:55:01.452069 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:55:01 crc kubenswrapper[4834]: I0121 14:55:01.452901 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:55:01 crc kubenswrapper[4834]: I0121 14:55:01.459463 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:55:01 crc kubenswrapper[4834]: I0121 14:55:01.461089 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.177690 4834 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.297309 4834 generic.go:334] "Generic (PLEG): container finished" podID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" containerID="f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa" exitCode=137 Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.297374 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.297393 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fac02b09-9d5c-4da6-afc5-15216ebcddd6","Type":"ContainerDied","Data":"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa"} Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.297485 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fac02b09-9d5c-4da6-afc5-15216ebcddd6","Type":"ContainerDied","Data":"39af6e999150af91308cccbd2d207f4b3e6e7e421c56e2c5b5eb23711b60903d"} Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.297519 4834 scope.go:117] "RemoveContainer" containerID="f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.324820 4834 scope.go:117] "RemoveContainer" containerID="f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa" Jan 21 14:55:02 crc kubenswrapper[4834]: E0121 14:55:02.325566 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa\": container with ID starting with f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa not found: ID does not exist" containerID="f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.325699 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa"} err="failed to get container status \"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa\": rpc error: code = NotFound desc = could not find container \"f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa\": container with ID starting with f6f2ba845eb7761e4025e417ebe877b82492558e20ed5c96b95c2dbe71ac29fa not found: ID does not exist" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.341395 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle\") pod \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.341681 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data\") pod \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.341754 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hgl2\" (UniqueName: \"kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2\") pod 
\"fac02b09-9d5c-4da6-afc5-15216ebcddd6\" (UID: \"fac02b09-9d5c-4da6-afc5-15216ebcddd6\") " Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.349031 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2" (OuterVolumeSpecName: "kube-api-access-5hgl2") pod "fac02b09-9d5c-4da6-afc5-15216ebcddd6" (UID: "fac02b09-9d5c-4da6-afc5-15216ebcddd6"). InnerVolumeSpecName "kube-api-access-5hgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.374884 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac02b09-9d5c-4da6-afc5-15216ebcddd6" (UID: "fac02b09-9d5c-4da6-afc5-15216ebcddd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.383511 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data" (OuterVolumeSpecName: "config-data") pod "fac02b09-9d5c-4da6-afc5-15216ebcddd6" (UID: "fac02b09-9d5c-4da6-afc5-15216ebcddd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.445161 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.445995 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac02b09-9d5c-4da6-afc5-15216ebcddd6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.446076 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hgl2\" (UniqueName: \"kubernetes.io/projected/fac02b09-9d5c-4da6-afc5-15216ebcddd6-kube-api-access-5hgl2\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.641017 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.657917 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.686754 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:02 crc kubenswrapper[4834]: E0121 14:55:02.687356 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.687378 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.687636 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.688446 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.698466 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.698970 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.718309 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.752518 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.862730 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.862824 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm5x\" (UniqueName: \"kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.863021 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.863178 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.863733 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.966521 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.966753 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.966844 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.966901 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdm5x\" (UniqueName: \"kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.967033 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.972235 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.972286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.975233 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.988963 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdm5x\" (UniqueName: \"kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:02 crc kubenswrapper[4834]: I0121 14:55:02.994075 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:03 crc kubenswrapper[4834]: I0121 14:55:03.084206 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:03 crc kubenswrapper[4834]: I0121 14:55:03.564207 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:55:04 crc kubenswrapper[4834]: I0121 14:55:04.348670 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac02b09-9d5c-4da6-afc5-15216ebcddd6" path="/var/lib/kubelet/pods/fac02b09-9d5c-4da6-afc5-15216ebcddd6/volumes" Jan 21 14:55:04 crc kubenswrapper[4834]: I0121 14:55:04.350376 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15dc9d10-a46a-4fec-b061-2e72caace933","Type":"ContainerStarted","Data":"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a"} Jan 21 14:55:04 crc kubenswrapper[4834]: I0121 14:55:04.350423 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15dc9d10-a46a-4fec-b061-2e72caace933","Type":"ContainerStarted","Data":"fc1492608f385723a53877e72a1382e2f5bc8050dc102c558d3d924800fffc74"} Jan 21 14:55:04 crc kubenswrapper[4834]: I0121 14:55:04.410250 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.410221361 podStartE2EDuration="2.410221361s" podCreationTimestamp="2026-01-21 14:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:04.393271183 +0000 UTC m=+1450.367620238" watchObservedRunningTime="2026-01-21 14:55:04.410221361 +0000 UTC m=+1450.384570406" Jan 21 14:55:05 crc kubenswrapper[4834]: I0121 14:55:05.496643 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:55:05 crc kubenswrapper[4834]: I0121 14:55:05.497352 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:55:05 crc kubenswrapper[4834]: I0121 14:55:05.497414 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:55:05 crc kubenswrapper[4834]: I0121 14:55:05.505401 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.351545 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.360045 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.585666 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.587647 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.605521 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.763726 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8br\" (UniqueName: \"kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.763877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.764102 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.764191 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.764471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.764613 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866689 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8br\" (UniqueName: \"kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866782 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866820 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866840 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866897 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.866940 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.868034 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.868197 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.868235 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.868373 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.869088 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.895339 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8br\" (UniqueName: 
\"kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br\") pod \"dnsmasq-dns-fcd6f8f8f-snm6f\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:06 crc kubenswrapper[4834]: I0121 14:55:06.923377 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:07 crc kubenswrapper[4834]: I0121 14:55:07.640010 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:55:08 crc kubenswrapper[4834]: I0121 14:55:08.084488 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:08 crc kubenswrapper[4834]: I0121 14:55:08.393867 4834 generic.go:334] "Generic (PLEG): container finished" podID="950459bc-faae-4448-8cf7-289275204041" containerID="b8baf943eda50c2543d74dbf54fc9dba0c6d901b97d6e43e93a94ac11756ab06" exitCode=0 Jan 21 14:55:08 crc kubenswrapper[4834]: I0121 14:55:08.394048 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" event={"ID":"950459bc-faae-4448-8cf7-289275204041","Type":"ContainerDied","Data":"b8baf943eda50c2543d74dbf54fc9dba0c6d901b97d6e43e93a94ac11756ab06"} Jan 21 14:55:08 crc kubenswrapper[4834]: I0121 14:55:08.394139 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" event={"ID":"950459bc-faae-4448-8cf7-289275204041","Type":"ContainerStarted","Data":"8dd5ea4d35cc429e85a3f341dec3c325095e84c934b2737f7c79ecf1a2d7be85"} Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.396113 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.397115 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-central-agent" containerID="cri-o://3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac" gracePeriod=30 Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.397601 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="proxy-httpd" containerID="cri-o://6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d" gracePeriod=30 Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.398476 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-notification-agent" containerID="cri-o://d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab" gracePeriod=30 Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.398550 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="sg-core" containerID="cri-o://1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5" gracePeriod=30 Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.413459 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" event={"ID":"950459bc-faae-4448-8cf7-289275204041","Type":"ContainerStarted","Data":"83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb"} Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.415566 4834 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.419779 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": read tcp 10.217.0.2:50956->10.217.0.197:3000: read: connection reset by peer" Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.451798 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" podStartSLOduration=3.451767003 podStartE2EDuration="3.451767003s" podCreationTimestamp="2026-01-21 14:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:09.438437938 +0000 UTC m=+1455.412786993" watchObservedRunningTime="2026-01-21 14:55:09.451767003 +0000 UTC m=+1455.426116068" Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.838074 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.838907 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-log" containerID="cri-o://34385e391fa72b3c6ed74863fb03ed5794c3bb5a50fec4540af17052fcbe131b" gracePeriod=30 Jan 21 14:55:09 crc kubenswrapper[4834]: I0121 14:55:09.839072 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-api" containerID="cri-o://ce0e409f77afad20522b2d5c5697982357ff82fc8d2ba9ccaea1b91f271aeb6d" gracePeriod=30 Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.429982 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerID="6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d" exitCode=0 Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.430022 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerID="1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5" exitCode=2 Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.430031 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerID="3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac" exitCode=0 Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.430082 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerDied","Data":"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d"} Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.430214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerDied","Data":"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5"} Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.430232 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerDied","Data":"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac"} Jan 21 14:55:10 crc 
kubenswrapper[4834]: I0121 14:55:10.433386 4834 generic.go:334] "Generic (PLEG): container finished" podID="e6e570e1-c508-4ce9-955a-309724698661" containerID="34385e391fa72b3c6ed74863fb03ed5794c3bb5a50fec4540af17052fcbe131b" exitCode=143 Jan 21 14:55:10 crc kubenswrapper[4834]: I0121 14:55:10.433476 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerDied","Data":"34385e391fa72b3c6ed74863fb03ed5794c3bb5a50fec4540af17052fcbe131b"} Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.085129 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.109422 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.485348 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.723043 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2mgk7"] Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.724869 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.735329 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.739783 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.748522 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2mgk7"] Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.754471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc4q\" (UniqueName: \"kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.754571 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.754648 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.754678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7" 
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.858862 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.859021 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.859061 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.859144 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc4q\" (UniqueName: \"kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.870698 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.871021 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.871961 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:13 crc kubenswrapper[4834]: I0121 14:55:13.881489 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc4q\" (UniqueName: \"kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q\") pod \"nova-cell1-cell-mapping-2mgk7\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:14 crc kubenswrapper[4834]: I0121 14:55:14.055424 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2mgk7"
Jan 21 14:55:14 crc kubenswrapper[4834]: I0121 14:55:14.494851 4834 generic.go:334] "Generic (PLEG): container finished" podID="e6e570e1-c508-4ce9-955a-309724698661" containerID="ce0e409f77afad20522b2d5c5697982357ff82fc8d2ba9ccaea1b91f271aeb6d" exitCode=0
Jan 21 14:55:14 crc kubenswrapper[4834]: I0121 14:55:14.495034 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerDied","Data":"ce0e409f77afad20522b2d5c5697982357ff82fc8d2ba9ccaea1b91f271aeb6d"}
Jan 21 14:55:14 crc kubenswrapper[4834]: I0121 14:55:14.705556 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2mgk7"]
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.208085 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295142 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295232 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295319 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295353 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tsv\" (UniqueName: \"kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295396 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295435 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295630 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.295754 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data\") pod \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\" (UID: \"aa044e90-bf24-4e12-99bd-e5e69325dbc0\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.296989 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.297337 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.304820 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts" (OuterVolumeSpecName: "scripts") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.314293 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv" (OuterVolumeSpecName: "kube-api-access-78tsv") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "kube-api-access-78tsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.344345 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.358457 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.407476 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.407509 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa044e90-bf24-4e12-99bd-e5e69325dbc0-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.407519 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78tsv\" (UniqueName: \"kubernetes.io/projected/aa044e90-bf24-4e12-99bd-e5e69325dbc0-kube-api-access-78tsv\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.407532 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.407541 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.428365 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.503045 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.515196 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") pod \"e6e570e1-c508-4ce9-955a-309724698661\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.515434 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data\") pod \"e6e570e1-c508-4ce9-955a-309724698661\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.515523 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkd2q\" (UniqueName: \"kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q\") pod \"e6e570e1-c508-4ce9-955a-309724698661\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.515549 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs\") pod \"e6e570e1-c508-4ce9-955a-309724698661\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.516075 4834 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.516093 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.521883 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs" (OuterVolumeSpecName: "logs") pod "e6e570e1-c508-4ce9-955a-309724698661" (UID: "e6e570e1-c508-4ce9-955a-309724698661"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.542171 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data" (OuterVolumeSpecName: "config-data") pod "aa044e90-bf24-4e12-99bd-e5e69325dbc0" (UID: "aa044e90-bf24-4e12-99bd-e5e69325dbc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.542509 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2mgk7" event={"ID":"d4abd259-e33a-45f5-bf8a-88c9828b4877","Type":"ContainerStarted","Data":"ca88e7b806a1891e2d431e5d68c2b01022bb4519e1cee82c14b740ffd923edfc"}
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.542548 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2mgk7" event={"ID":"d4abd259-e33a-45f5-bf8a-88c9828b4877","Type":"ContainerStarted","Data":"206feeb7f5ca94967b136786c8daef44a4b55421441ec92bf0c56c8cee0fabb2"}
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.570790 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q" (OuterVolumeSpecName: "kube-api-access-tkd2q") pod "e6e570e1-c508-4ce9-955a-309724698661" (UID: "e6e570e1-c508-4ce9-955a-309724698661"). InnerVolumeSpecName "kube-api-access-tkd2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.583823 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2mgk7" podStartSLOduration=2.583751734 podStartE2EDuration="2.583751734s" podCreationTimestamp="2026-01-21 14:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:15.570441129 +0000 UTC m=+1461.544790174" watchObservedRunningTime="2026-01-21 14:55:15.583751734 +0000 UTC m=+1461.558100779"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.619175 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerID="d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab" exitCode=0
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.622336 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.641473 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerDied","Data":"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab"}
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.641571 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa044e90-bf24-4e12-99bd-e5e69325dbc0","Type":"ContainerDied","Data":"c17e12bc2deffccb715ad33a9da3cf7a15e59ca1ae78fb8950c18b7629205507"}
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.641633 4834 scope.go:117] "RemoveContainer" containerID="6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.689327 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e570e1-c508-4ce9-955a-309724698661" (UID: "e6e570e1-c508-4ce9-955a-309724698661"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.710886 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6e570e1-c508-4ce9-955a-309724698661","Type":"ContainerDied","Data":"8306d7b2efd620a5c2f65aeca8e0ef96d20e5d12be67fa8038ad21369ee58664"}
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.711115 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.713372 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") pod \"e6e570e1-c508-4ce9-955a-309724698661\" (UID: \"e6e570e1-c508-4ce9-955a-309724698661\") "
Jan 21 14:55:15 crc kubenswrapper[4834]: W0121 14:55:15.716484 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e6e570e1-c508-4ce9-955a-309724698661/volumes/kubernetes.io~secret/combined-ca-bundle
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.716568 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e570e1-c508-4ce9-955a-309724698661" (UID: "e6e570e1-c508-4ce9-955a-309724698661"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.733726 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.733771 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa044e90-bf24-4e12-99bd-e5e69325dbc0-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.733785 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkd2q\" (UniqueName: \"kubernetes.io/projected/e6e570e1-c508-4ce9-955a-309724698661-kube-api-access-tkd2q\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.733809 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6e570e1-c508-4ce9-955a-309724698661-logs\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.851319 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data" (OuterVolumeSpecName: "config-data") pod "e6e570e1-c508-4ce9-955a-309724698661" (UID: "e6e570e1-c508-4ce9-955a-309724698661"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.888310 4834 scope.go:117] "RemoveContainer" containerID="1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5"
Jan 21 14:55:15 crc kubenswrapper[4834]: I0121 14:55:15.991969 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e570e1-c508-4ce9-955a-309724698661-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.022154 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.064126 4834 scope.go:117] "RemoveContainer" containerID="d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.073990 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.096211 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.096874 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-api"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.096901 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-api"
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.096947 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="sg-core"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.096957 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="sg-core"
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.096983 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-central-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.096992 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-central-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.097018 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-notification-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097026 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-notification-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.097043 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="proxy-httpd"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097052 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="proxy-httpd"
Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.097063 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-log"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097072 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-log"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097444 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-central-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097465 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-log"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097478 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="sg-core"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097493 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e570e1-c508-4ce9-955a-309724698661" containerName="nova-api-api"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097507 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="proxy-httpd"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.097531 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" containerName="ceilometer-notification-agent"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.101029 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.106135 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.106342 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.106481 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.112912 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.160312 4834 scope.go:117] "RemoveContainer" containerID="3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.163639 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.177150 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.192580 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.194676 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.197632 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.197694 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.197765 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.197823 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.197920 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.198003 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6p87\" (UniqueName: \"kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.198109 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.198143 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.200009 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.202631 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.202914 4834 scope.go:117] "RemoveContainer" containerID="6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d"
Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.203114 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.203361 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d\": container with ID starting with 6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d not found: ID does not exist" containerID="6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.203389 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d"} err="failed to get container status \"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d\": rpc error: code = NotFound desc = could not find container \"6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d\": container with ID starting with 6f23d67df999187654569c473c71b0829bab892dab2067ae5745e05c135f906d not found: ID does not exist" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.203413 4834 scope.go:117] "RemoveContainer" containerID="1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5" Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.204694 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5\": container with ID starting with 1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5 not found: ID does not exist" containerID="1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.204729 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5"} err="failed to get container status \"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5\": rpc error: code = NotFound desc = could not find container \"1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5\": container with ID starting with 1ed1d597f4372e73808957fbabe4cee06eadaab74c965d5daa561fb86431d4f5 not found: ID does not exist" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.204747 4834 scope.go:117] "RemoveContainer" containerID="d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab" Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.205534 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab\": container with ID starting with d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab not found: ID does not exist" containerID="d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.205595 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab"} err="failed to get container status \"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab\": rpc error: code = NotFound desc = could not find container 
\"d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab\": container with ID starting with d1c1cc212ebecc983c073f6de71cd430f8f2312020921b9681e804e5b5b9a9ab not found: ID does not exist" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.205614 4834 scope.go:117] "RemoveContainer" containerID="3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac" Jan 21 14:55:16 crc kubenswrapper[4834]: E0121 14:55:16.206297 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac\": container with ID starting with 3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac not found: ID does not exist" containerID="3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.206322 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac"} err="failed to get container status \"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac\": rpc error: code = NotFound desc = could not find container \"3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac\": container with ID starting with 3444befb9b5429f8f4166ab0ba2d9c89bb367ac8dc35755241d6f52fb5b60fac not found: ID does not exist" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.206365 4834 scope.go:117] "RemoveContainer" containerID="ce0e409f77afad20522b2d5c5697982357ff82fc8d2ba9ccaea1b91f271aeb6d" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.216544 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.247200 4834 scope.go:117] "RemoveContainer" containerID="34385e391fa72b3c6ed74863fb03ed5794c3bb5a50fec4540af17052fcbe131b" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.301982 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302150 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302199 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302243 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302320 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302346 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302380 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302403 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302433 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302460 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkfn4\" (UniqueName: \"kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302481 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6p87\" (UniqueName: \"kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302499 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302536 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.302556 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.303101 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.303201 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.310375 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.311642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.311782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.319702 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.322134 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.324741 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6p87\" (UniqueName: \"kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87\") pod \"ceilometer-0\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.354862 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa044e90-bf24-4e12-99bd-e5e69325dbc0" path="/var/lib/kubelet/pods/aa044e90-bf24-4e12-99bd-e5e69325dbc0/volumes" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.358195 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e570e1-c508-4ce9-955a-309724698661" path="/var/lib/kubelet/pods/e6e570e1-c508-4ce9-955a-309724698661/volumes" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406563 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 
14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406657 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406699 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406818 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkfn4\" (UniqueName: \"kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406845 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.406891 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.410742 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.411566 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.414390 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.425060 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.430919 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.438844 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkfn4\" (UniqueName: \"kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4\") pod \"nova-api-0\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.460269 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.527000 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:16 crc kubenswrapper[4834]: I0121 14:55:16.927362 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.012326 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.012651 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerName="dnsmasq-dns" containerID="cri-o://b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea" gracePeriod=10 Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.113911 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.114178 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.114244 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.186352 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:17 crc kubenswrapper[4834]: W0121 14:55:17.211663 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf4d8dbd_38f5_484e_a974_aff124ecdf31.slice/crio-be433f2ab8e8345042ce8577d92cff5332a56e0ae2e8ba16e898fdb42a4571ec WatchSource:0}: Error finding container be433f2ab8e8345042ce8577d92cff5332a56e0ae2e8ba16e898fdb42a4571ec: Status 404 returned error can't find the container with id be433f2ab8e8345042ce8577d92cff5332a56e0ae2e8ba16e898fdb42a4571ec Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.629060 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.757499 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerStarted","Data":"be433f2ab8e8345042ce8577d92cff5332a56e0ae2e8ba16e898fdb42a4571ec"} Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.759454 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerStarted","Data":"a0a613f62eca588010824ffbbb3db3da69b63515892c046e5350635a1e2a87e7"} Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.765097 4834 generic.go:334] "Generic (PLEG): container finished" podID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerID="b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea" exitCode=0 Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.765151 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" event={"ID":"a90468f0-bd39-4ba4-8e2f-78305f0a4f22","Type":"ContainerDied","Data":"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea"} Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.765193 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" event={"ID":"a90468f0-bd39-4ba4-8e2f-78305f0a4f22","Type":"ContainerDied","Data":"9222dccc5039697b99f7ff77d56490f35d06d457600130e0a0f8b417411ae993"} Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.765229 4834 scope.go:117] "RemoveContainer" containerID="b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.765309 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-vjvxl" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.766699 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.766786 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqmfh\" (UniqueName: \"kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.766832 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.767031 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.767106 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.767143 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0\") pod \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\" (UID: \"a90468f0-bd39-4ba4-8e2f-78305f0a4f22\") " Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.773192 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh" (OuterVolumeSpecName: "kube-api-access-dqmfh") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "kube-api-access-dqmfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.827135 4834 scope.go:117] "RemoveContainer" containerID="3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.873152 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqmfh\" (UniqueName: \"kubernetes.io/projected/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-kube-api-access-dqmfh\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.873980 4834 scope.go:117] "RemoveContainer" containerID="b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea" Jan 21 14:55:17 crc kubenswrapper[4834]: E0121 14:55:17.879488 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea\": container with ID starting with b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea not found: ID does not exist" containerID="b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.879600 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea"} err="failed to get container status \"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea\": rpc error: code = NotFound desc = could not find container \"b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea\": container with ID starting with b0f30c717f4b1aa8a03e1abee5a57fe27e8b81df8bc65ed15894eb31ef0065ea not found: ID does not exist" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.879647 4834 scope.go:117] "RemoveContainer" containerID="3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.881367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: E0121 14:55:17.882336 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210\": container with ID starting with 3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210 not found: ID does not exist" containerID="3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.882378 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210"} err="failed to get container status \"3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210\": rpc error: code = NotFound desc = could not find container \"3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210\": container with ID starting with 3725d7f07a8b26c837ebe797237ad2624284a00e7f2ce3b913cde3e5836db210 not found: ID does not exist" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.889217 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.896498 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.899452 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.914675 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config" (OuterVolumeSpecName: "config") pod "a90468f0-bd39-4ba4-8e2f-78305f0a4f22" (UID: "a90468f0-bd39-4ba4-8e2f-78305f0a4f22"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.976574 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.976658 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.976671 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.976702 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:17 crc kubenswrapper[4834]: I0121 14:55:17.976715 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a90468f0-bd39-4ba4-8e2f-78305f0a4f22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.158632 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.171372 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-vjvxl"] Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.340021 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" path="/var/lib/kubelet/pods/a90468f0-bd39-4ba4-8e2f-78305f0a4f22/volumes" Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.781455 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerStarted","Data":"23032af459c792d2d217d70962e34238928147320ddb78418d19196da78402c6"} Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.782887 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerStarted","Data":"6bd0ee0e50249810c726c6d35347da2ba9136eb8cf002babdd0f2924265fa223"} Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.787075 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerStarted","Data":"19c506fe91677f38040b6ac9abfe39213d5919b09eeccd51a69bcdebc4c4dc90"} Jan 21 14:55:18 crc kubenswrapper[4834]: I0121 14:55:18.809167 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.809138936 podStartE2EDuration="2.809138936s" podCreationTimestamp="2026-01-21 14:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:18.80636787 +0000 UTC m=+1464.780716925" watchObservedRunningTime="2026-01-21 14:55:18.809138936 +0000 UTC m=+1464.783487991" Jan 21 14:55:19 crc kubenswrapper[4834]: I0121 14:55:19.806910 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerStarted","Data":"107a731e7f2664d7f9aa9d644b794b0125c9dda7913148f699b179b99aae879d"} Jan 21 14:55:20 crc kubenswrapper[4834]: I0121 14:55:20.819718 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerStarted","Data":"cdd9893db2d5fb4b6116c9182d5161ce755afa8f174d7fe35f8d4c29d5c0807e"} Jan 21 14:55:21 crc kubenswrapper[4834]: I0121 14:55:21.868496 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerStarted","Data":"4c978202772d336f5ede7c461a6d979b857f71449d109c11375fecceb91b59eb"} Jan 21 14:55:21 crc kubenswrapper[4834]: I0121 14:55:21.869652 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 14:55:22 crc kubenswrapper[4834]: I0121 14:55:22.880968 4834 generic.go:334] "Generic (PLEG): container finished" podID="d4abd259-e33a-45f5-bf8a-88c9828b4877" containerID="ca88e7b806a1891e2d431e5d68c2b01022bb4519e1cee82c14b740ffd923edfc" exitCode=0 Jan 21 14:55:22 crc kubenswrapper[4834]: I0121 14:55:22.881077 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2mgk7" event={"ID":"d4abd259-e33a-45f5-bf8a-88c9828b4877","Type":"ContainerDied","Data":"ca88e7b806a1891e2d431e5d68c2b01022bb4519e1cee82c14b740ffd923edfc"} Jan 21 14:55:22 crc kubenswrapper[4834]: I0121 14:55:22.904050 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.782862677 podStartE2EDuration="6.904021897s" podCreationTimestamp="2026-01-21 14:55:16 +0000 UTC" firstStartedPulling="2026-01-21 14:55:17.144159151 +0000 UTC m=+1463.118508196" lastFinishedPulling="2026-01-21 14:55:21.265318371 +0000 UTC m=+1467.239667416" observedRunningTime="2026-01-21 14:55:21.898498233 +0000 UTC m=+1467.872847278" watchObservedRunningTime="2026-01-21 14:55:22.904021897 +0000 UTC m=+1468.878370942" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.318898 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.459232 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts\") pod \"d4abd259-e33a-45f5-bf8a-88c9828b4877\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.460229 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle\") pod \"d4abd259-e33a-45f5-bf8a-88c9828b4877\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.460640 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfc4q\" (UniqueName: \"kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q\") pod \"d4abd259-e33a-45f5-bf8a-88c9828b4877\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.460745 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data\") pod \"d4abd259-e33a-45f5-bf8a-88c9828b4877\" (UID: \"d4abd259-e33a-45f5-bf8a-88c9828b4877\") " Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.469856 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q" (OuterVolumeSpecName: "kube-api-access-tfc4q") pod "d4abd259-e33a-45f5-bf8a-88c9828b4877" (UID: "d4abd259-e33a-45f5-bf8a-88c9828b4877"). InnerVolumeSpecName "kube-api-access-tfc4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.484265 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts" (OuterVolumeSpecName: "scripts") pod "d4abd259-e33a-45f5-bf8a-88c9828b4877" (UID: "d4abd259-e33a-45f5-bf8a-88c9828b4877"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.516939 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data" (OuterVolumeSpecName: "config-data") pod "d4abd259-e33a-45f5-bf8a-88c9828b4877" (UID: "d4abd259-e33a-45f5-bf8a-88c9828b4877"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.536170 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4abd259-e33a-45f5-bf8a-88c9828b4877" (UID: "d4abd259-e33a-45f5-bf8a-88c9828b4877"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.563961 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfc4q\" (UniqueName: \"kubernetes.io/projected/d4abd259-e33a-45f5-bf8a-88c9828b4877-kube-api-access-tfc4q\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.564003 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.564014 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.564023 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4abd259-e33a-45f5-bf8a-88c9828b4877-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.905276 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2mgk7" event={"ID":"d4abd259-e33a-45f5-bf8a-88c9828b4877","Type":"ContainerDied","Data":"206feeb7f5ca94967b136786c8daef44a4b55421441ec92bf0c56c8cee0fabb2"} Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.905782 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2mgk7" Jan 21 14:55:24 crc kubenswrapper[4834]: I0121 14:55:24.905875 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206feeb7f5ca94967b136786c8daef44a4b55421441ec92bf0c56c8cee0fabb2" Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.110867 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.111298 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-log" containerID="cri-o://6bd0ee0e50249810c726c6d35347da2ba9136eb8cf002babdd0f2924265fa223" gracePeriod=30 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.111419 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-api" containerID="cri-o://23032af459c792d2d217d70962e34238928147320ddb78418d19196da78402c6" gracePeriod=30 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.174107 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.174775 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerName="nova-scheduler-scheduler" containerID="cri-o://d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" gracePeriod=30 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.193827 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.194942 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" 
containerName="nova-metadata-metadata" containerID="cri-o://84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf" gracePeriod=30 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.205050 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" containerID="cri-o://8548e9c3f056e8fa0ba7a51bbdd28745fa945e40cc8f2673cf28cb7d9c2a7e43" gracePeriod=30 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.923319 4834 generic.go:334] "Generic (PLEG): container finished" podID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerID="23032af459c792d2d217d70962e34238928147320ddb78418d19196da78402c6" exitCode=0 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.923365 4834 generic.go:334] "Generic (PLEG): container finished" podID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerID="6bd0ee0e50249810c726c6d35347da2ba9136eb8cf002babdd0f2924265fa223" exitCode=143 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.923450 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerDied","Data":"23032af459c792d2d217d70962e34238928147320ddb78418d19196da78402c6"} Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.923480 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerDied","Data":"6bd0ee0e50249810c726c6d35347da2ba9136eb8cf002babdd0f2924265fa223"} Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.928764 4834 generic.go:334] "Generic (PLEG): container finished" podID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerID="8548e9c3f056e8fa0ba7a51bbdd28745fa945e40cc8f2673cf28cb7d9c2a7e43" exitCode=143 Jan 21 14:55:25 crc kubenswrapper[4834]: I0121 14:55:25.928949 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerDied","Data":"8548e9c3f056e8fa0ba7a51bbdd28745fa945e40cc8f2673cf28cb7d9c2a7e43"} Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.172777 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.339629 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.339744 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.339828 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.339882 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkfn4\" (UniqueName: \"kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.340042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.340156 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle\") pod \"df4d8dbd-38f5-484e-a974-aff124ecdf31\" (UID: \"df4d8dbd-38f5-484e-a974-aff124ecdf31\") " Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.341009 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs" (OuterVolumeSpecName: "logs") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.357127 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4" (OuterVolumeSpecName: "kube-api-access-gkfn4") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "kube-api-access-gkfn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.376814 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data" (OuterVolumeSpecName: "config-data") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.411808 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.435784 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.436514 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "df4d8dbd-38f5-484e-a974-aff124ecdf31" (UID: "df4d8dbd-38f5-484e-a974-aff124ecdf31"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444389 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444443 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444457 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444474 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df4d8dbd-38f5-484e-a974-aff124ecdf31-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444485 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df4d8dbd-38f5-484e-a974-aff124ecdf31-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.444496 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkfn4\" (UniqueName: \"kubernetes.io/projected/df4d8dbd-38f5-484e-a974-aff124ecdf31-kube-api-access-gkfn4\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.950302 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df4d8dbd-38f5-484e-a974-aff124ecdf31","Type":"ContainerDied","Data":"be433f2ab8e8345042ce8577d92cff5332a56e0ae2e8ba16e898fdb42a4571ec"} Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.950372 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.950382 4834 scope.go:117] "RemoveContainer" containerID="23032af459c792d2d217d70962e34238928147320ddb78418d19196da78402c6" Jan 21 14:55:26 crc kubenswrapper[4834]: I0121 14:55:26.994706 4834 scope.go:117] "RemoveContainer" containerID="6bd0ee0e50249810c726c6d35347da2ba9136eb8cf002babdd0f2924265fa223" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.009041 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.041041 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.054249 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:27 crc kubenswrapper[4834]: E0121 14:55:27.054860 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-api" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.054888 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-api" Jan 21 14:55:27 crc kubenswrapper[4834]: E0121 14:55:27.054914 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerName="init" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.054924 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerName="init" Jan 21 14:55:27 crc kubenswrapper[4834]: E0121 14:55:27.054960 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-log" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.054969 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-log" Jan 21 14:55:27 crc kubenswrapper[4834]: E0121 14:55:27.054989 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4abd259-e33a-45f5-bf8a-88c9828b4877" containerName="nova-manage" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.054997 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4abd259-e33a-45f5-bf8a-88c9828b4877" containerName="nova-manage" Jan 21 14:55:27 crc kubenswrapper[4834]: E0121 14:55:27.055022 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerName="dnsmasq-dns" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.055029 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" containerName="dnsmasq-dns" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.055242 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4abd259-e33a-45f5-bf8a-88c9828b4877" containerName="nova-manage" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.055262 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-log" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.055278 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" containerName="nova-api-api" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.055285 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90468f0-bd39-4ba4-8e2f-78305f0a4f22" 
containerName="dnsmasq-dns" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.056517 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.062131 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.062305 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.062667 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.084224 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.159959 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.160099 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.160124 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcslj\" (UniqueName: \"kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.160144 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.160208 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.160275 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262387 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262488 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262510 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcslj\" (UniqueName: \"kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262531 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262623 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.262866 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.269946 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.270494 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.271606 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.285782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcslj\" (UniqueName: \"kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.285895 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data\") pod \"nova-api-0\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.390453 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.902255 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:55:27 crc kubenswrapper[4834]: W0121 14:55:27.903789 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10412867_64ac_413b_8f2f_9bdac2bb8759.slice/crio-ecfb1fda24285175bc271aaaee2c4805656dbdbfade0205a6a7fcb99123e2a05 WatchSource:0}: Error finding container ecfb1fda24285175bc271aaaee2c4805656dbdbfade0205a6a7fcb99123e2a05: Status 404 returned error can't find the container with id ecfb1fda24285175bc271aaaee2c4805656dbdbfade0205a6a7fcb99123e2a05 Jan 21 14:55:27 crc kubenswrapper[4834]: I0121 14:55:27.963953 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerStarted","Data":"ecfb1fda24285175bc271aaaee2c4805656dbdbfade0205a6a7fcb99123e2a05"} Jan 21 14:55:28 crc kubenswrapper[4834]: I0121 14:55:28.344742 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4d8dbd-38f5-484e-a974-aff124ecdf31" path="/var/lib/kubelet/pods/df4d8dbd-38f5-484e-a974-aff124ecdf31/volumes" Jan 21 14:55:28 crc kubenswrapper[4834]: I0121 14:55:28.979002 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerStarted","Data":"008e24c6de4a7972abcecfe67f07648df9fa7fff4e253229a970cbbe16f3e832"} Jan 21 14:55:28 crc kubenswrapper[4834]: I0121 14:55:28.979548 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerStarted","Data":"dfd1d1f4b01cbe2d02e24b2d79800e43e22f1f1484f9243a7619cca4445cac51"} Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.012489 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.012463293 podStartE2EDuration="2.012463293s" podCreationTimestamp="2026-01-21 14:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:29.010473222 +0000 UTC m=+1474.984822267" watchObservedRunningTime="2026-01-21 14:55:29.012463293 +0000 UTC m=+1474.986812338" Jan 21 14:55:29 crc kubenswrapper[4834]: E0121 14:55:29.514657 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c is running failed: container process not found" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:29 crc kubenswrapper[4834]: E0121 14:55:29.516539 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c is running failed: container process not found" 
containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:29 crc kubenswrapper[4834]: E0121 14:55:29.517430 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c is running failed: container process not found" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:55:29 crc kubenswrapper[4834]: E0121 14:55:29.517475 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerName="nova-scheduler-scheduler" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.623963 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39174->10.217.0.194:8775: read: connection reset by peer" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.623963 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39180->10.217.0.194:8775: read: connection reset by peer" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.807212 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.836464 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data\") pod \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.836657 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle\") pod \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.837017 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5knfq\" (UniqueName: \"kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq\") pod \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\" (UID: \"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e\") " Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.855202 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq" (OuterVolumeSpecName: "kube-api-access-5knfq") pod "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" (UID: "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e"). InnerVolumeSpecName "kube-api-access-5knfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:29 crc kubenswrapper[4834]: E0121 14:55:29.856966 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be56b32_2bc3_4e0a_9098_ed19bc90187d.slice/crio-conmon-84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.901626 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data" (OuterVolumeSpecName: "config-data") pod "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" (UID: "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.903586 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" (UID: "93d641d5-b37a-49c7-9c13-0f9edfc6fe3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.946198 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5knfq\" (UniqueName: \"kubernetes.io/projected/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-kube-api-access-5knfq\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.946752 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.946765 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.994665 4834 generic.go:334] "Generic (PLEG): container finished" podID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" exitCode=0 Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.994772 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e","Type":"ContainerDied","Data":"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c"} Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.994820 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93d641d5-b37a-49c7-9c13-0f9edfc6fe3e","Type":"ContainerDied","Data":"2f118b56f69baad63a17e8e4c483235717c9765ca05a36320ee43f6800307c49"} Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.994846 4834 scope.go:117] "RemoveContainer" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" Jan 21 14:55:29 crc kubenswrapper[4834]: I0121 14:55:29.995127 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.001473 4834 generic.go:334] "Generic (PLEG): container finished" podID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerID="84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf" exitCode=0 Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.001605 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerDied","Data":"84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf"} Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.041669 4834 scope.go:117] "RemoveContainer" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" Jan 21 14:55:30 crc kubenswrapper[4834]: E0121 14:55:30.045668 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c\": container with ID starting with d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c not found: ID does not exist" containerID="d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.045733 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c"} err="failed to get container status \"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c\": rpc error: code = NotFound desc = could not find container \"d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c\": container with ID starting with d46ad725b1c45e015dd7cafa292d790548c11e77ea1a1bc4b0c5b47cbe2f1d9c not found: ID does not exist" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.046505 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.061320 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.064893 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.128564 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4834]: E0121 14:55:30.129401 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerName="nova-scheduler-scheduler" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129428 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerName="nova-scheduler-scheduler" Jan 21 14:55:30 crc kubenswrapper[4834]: E0121 14:55:30.129449 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-metadata" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129457 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-metadata" Jan 21 14:55:30 crc kubenswrapper[4834]: E0121 14:55:30.129483 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129491 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129734 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-metadata" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129756 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" containerName="nova-scheduler-scheduler" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.129778 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" containerName="nova-metadata-log" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.130864 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.137244 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.139647 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.150992 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qpr\" (UniqueName: \"kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr\") pod \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151149 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs\") pod \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151240 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data\") pod \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151288 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle\") pod \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151336 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs\") pod \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\" (UID: \"7be56b32-2bc3-4e0a-9098-ed19bc90187d\") " Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151484 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151604 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dgr\" (UniqueName: \"kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.151650 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.160264 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr" 
(OuterVolumeSpecName: "kube-api-access-c9qpr") pod "7be56b32-2bc3-4e0a-9098-ed19bc90187d" (UID: "7be56b32-2bc3-4e0a-9098-ed19bc90187d"). InnerVolumeSpecName "kube-api-access-c9qpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.160838 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs" (OuterVolumeSpecName: "logs") pod "7be56b32-2bc3-4e0a-9098-ed19bc90187d" (UID: "7be56b32-2bc3-4e0a-9098-ed19bc90187d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.207449 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be56b32-2bc3-4e0a-9098-ed19bc90187d" (UID: "7be56b32-2bc3-4e0a-9098-ed19bc90187d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.230619 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data" (OuterVolumeSpecName: "config-data") pod "7be56b32-2bc3-4e0a-9098-ed19bc90187d" (UID: "7be56b32-2bc3-4e0a-9098-ed19bc90187d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255229 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dgr\" (UniqueName: \"kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255299 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255390 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255464 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qpr\" (UniqueName: \"kubernetes.io/projected/7be56b32-2bc3-4e0a-9098-ed19bc90187d-kube-api-access-c9qpr\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255477 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7be56b32-2bc3-4e0a-9098-ed19bc90187d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255487 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255496 4834 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.255322 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7be56b32-2bc3-4e0a-9098-ed19bc90187d" (UID: "7be56b32-2bc3-4e0a-9098-ed19bc90187d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.261702 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.265914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.276426 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dgr\" (UniqueName: \"kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr\") pod \"nova-scheduler-0\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.336141 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d641d5-b37a-49c7-9c13-0f9edfc6fe3e" path="/var/lib/kubelet/pods/93d641d5-b37a-49c7-9c13-0f9edfc6fe3e/volumes" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.357086 4834 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be56b32-2bc3-4e0a-9098-ed19bc90187d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.469950 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:55:30 crc kubenswrapper[4834]: I0121 14:55:30.956979 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:55:30 crc kubenswrapper[4834]: W0121 14:55:30.971065 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9503bd6_1084_408a_8e1d_65d66dab4170.slice/crio-4fcd2fdb75458ce100d6843a7ade251b39879b70b2374f74fb8e4bca6aff8087 WatchSource:0}: Error finding container 4fcd2fdb75458ce100d6843a7ade251b39879b70b2374f74fb8e4bca6aff8087: Status 404 returned error can't find the container with id 4fcd2fdb75458ce100d6843a7ade251b39879b70b2374f74fb8e4bca6aff8087 Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.020186 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7be56b32-2bc3-4e0a-9098-ed19bc90187d","Type":"ContainerDied","Data":"c8b5185e3bc8f0d42c1dbc320c5b8375560ac15b34732f442797e32ce287fb93"} Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.020260 4834 scope.go:117] "RemoveContainer" containerID="84a0f1eceea518b2b97035502f7b884cb1d7a20924dffc4b15347ba3c4296bbf" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.020288 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.022600 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9503bd6-1084-408a-8e1d-65d66dab4170","Type":"ContainerStarted","Data":"4fcd2fdb75458ce100d6843a7ade251b39879b70b2374f74fb8e4bca6aff8087"} Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.115408 4834 scope.go:117] "RemoveContainer" containerID="8548e9c3f056e8fa0ba7a51bbdd28745fa945e40cc8f2673cf28cb7d9c2a7e43" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.203368 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.215787 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.230243 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.232501 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.240012 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.240341 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.245498 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.280293 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk6r6\" (UniqueName: \"kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.280360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.280387 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.280463 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.280547 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.383304 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk6r6\" (UniqueName: \"kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.385010 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.385257 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " 
pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.386016 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.386248 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.387804 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.391482 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.392416 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.395219 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.411410 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk6r6\" (UniqueName: \"kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6\") pod \"nova-metadata-0\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " pod="openstack/nova-metadata-0" Jan 21 14:55:31 crc kubenswrapper[4834]: I0121 14:55:31.561881 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:55:32 crc kubenswrapper[4834]: W0121 14:55:32.032327 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod615eb241_8fa5_4c76_b710_19a3bd65e0ac.slice/crio-2b16fa379e52ce431a6c58ef3562518a3af3479928cd05e053243b7049a84491 WatchSource:0}: Error finding container 2b16fa379e52ce431a6c58ef3562518a3af3479928cd05e053243b7049a84491: Status 404 returned error can't find the container with id 2b16fa379e52ce431a6c58ef3562518a3af3479928cd05e053243b7049a84491 Jan 21 14:55:32 crc kubenswrapper[4834]: I0121 14:55:32.040217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9503bd6-1084-408a-8e1d-65d66dab4170","Type":"ContainerStarted","Data":"8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861"} Jan 21 14:55:32 crc kubenswrapper[4834]: I0121 14:55:32.043413 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:55:32 crc kubenswrapper[4834]: I0121 14:55:32.067905 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.067871144 podStartE2EDuration="2.067871144s" podCreationTimestamp="2026-01-21 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:32.060328959 +0000 UTC m=+1478.034678004" watchObservedRunningTime="2026-01-21 14:55:32.067871144 +0000 UTC m=+1478.042220189" Jan 21 14:55:32 crc kubenswrapper[4834]: I0121 14:55:32.337051 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be56b32-2bc3-4e0a-9098-ed19bc90187d" path="/var/lib/kubelet/pods/7be56b32-2bc3-4e0a-9098-ed19bc90187d/volumes" Jan 21 14:55:33 crc kubenswrapper[4834]: I0121 14:55:33.062039 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerStarted","Data":"1c0cc5a6d21ee15bd2b37944b66a859d2a28a82ee9266a7e79976bf6ab2c55a8"} Jan 21 14:55:33 crc kubenswrapper[4834]: I0121 14:55:33.062595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerStarted","Data":"2b16fa379e52ce431a6c58ef3562518a3af3479928cd05e053243b7049a84491"} Jan 21 14:55:34 crc kubenswrapper[4834]: I0121 14:55:34.074844 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerStarted","Data":"85378de999d4b746f726db38c15173ce2ec1d4f11248e24fa1ad5e385ff441aa"} Jan 21 14:55:34 crc kubenswrapper[4834]: I0121 14:55:34.116821 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.11678746 podStartE2EDuration="3.11678746s" podCreationTimestamp="2026-01-21 14:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:55:34.094778185 +0000 UTC m=+1480.069127230" watchObservedRunningTime="2026-01-21 14:55:34.11678746 +0000 UTC m=+1480.091136505" Jan 21 14:55:35 crc kubenswrapper[4834]: I0121 14:55:35.470045 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 14:55:36 crc kubenswrapper[4834]: I0121 14:55:36.562486 4834 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:55:36 crc kubenswrapper[4834]: I0121 14:55:36.563037 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 14:55:37 crc kubenswrapper[4834]: I0121 14:55:37.391681 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:37 crc kubenswrapper[4834]: I0121 14:55:37.391797 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 14:55:38 crc kubenswrapper[4834]: I0121 14:55:38.409394 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:38 crc kubenswrapper[4834]: I0121 14:55:38.409512 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:40 crc kubenswrapper[4834]: I0121 14:55:40.471246 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 14:55:40 crc kubenswrapper[4834]: I0121 14:55:40.501453 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4834]: I0121 14:55:41.219663 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 14:55:41 crc kubenswrapper[4834]: I0121 14:55:41.563203 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:41 crc kubenswrapper[4834]: I0121 14:55:41.563302 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 14:55:42 crc kubenswrapper[4834]: I0121 14:55:42.576221 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:42 crc kubenswrapper[4834]: I0121 14:55:42.576291 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 14:55:46 crc kubenswrapper[4834]: I0121 14:55:46.474253 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.114597 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.115107 4834 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.400116 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.400215 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.401065 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.401136 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.409658 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:55:47 crc kubenswrapper[4834]: I0121 14:55:47.409736 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 14:55:51 crc kubenswrapper[4834]: I0121 14:55:51.570399 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:55:51 crc kubenswrapper[4834]: I0121 14:55:51.574630 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 14:55:51 crc kubenswrapper[4834]: I0121 14:55:51.577571 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:55:52 crc kubenswrapper[4834]: I0121 14:55:52.316416 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.088531 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2ffx"] Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.091806 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.108543 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ffx"] Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.236997 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.237162 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrqw\" (UniqueName: \"kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.237255 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.340070 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.340271 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.340720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.340961 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.342302 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmrqw\" (UniqueName: \"kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.365008 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lmrqw\" (UniqueName: \"kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw\") pod \"community-operators-s2ffx\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") " pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:07 crc kubenswrapper[4834]: I0121 14:56:07.435010 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:08 crc kubenswrapper[4834]: I0121 14:56:08.049340 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2ffx"] Jan 21 14:56:08 crc kubenswrapper[4834]: I0121 14:56:08.468303 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerID="94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c" exitCode=0 Jan 21 14:56:08 crc kubenswrapper[4834]: I0121 14:56:08.468404 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerDied","Data":"94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c"} Jan 21 14:56:08 crc kubenswrapper[4834]: I0121 14:56:08.468630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerStarted","Data":"a7eb0b76b29c662abe7458c9436b1ced70300eea382d8639c019cae4f7113266"} Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.432042 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.434034 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.446712 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.467511 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fxzqd"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.469777 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.484163 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.499816 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.499897 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.499919 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6kr\" (UniqueName: \"kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.500022 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7clw\" (UniqueName: \"kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.513768 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.534058 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxzqd"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.552962 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1e5f-account-create-update-p7tv4"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.600803 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1e5f-account-create-update-p7tv4"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.601711 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.601749 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6kr\" (UniqueName: \"kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 
14:56:09.601845 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7clw\" (UniqueName: \"kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.601951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.602881 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.603563 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.625376 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.627250 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.648572 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.703561 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7clw\" (UniqueName: \"kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw\") pod \"cinder-1e5f-account-create-update-sm488\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.863736 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.864339 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjjf\" (UniqueName: \"kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.915063 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6kr\" (UniqueName: \"kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr\") pod \"root-account-create-update-fxzqd\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.924622 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.938066 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.966353 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjjf\" (UniqueName: \"kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.969660 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:09 crc kubenswrapper[4834]: I0121 14:56:09.966481 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.064446 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.064897 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.078771 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjjf\" (UniqueName: \"kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf\") pod \"glance-9ca3-account-create-update-prg2p\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.079945 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.080010 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data podName:b87b73b4-2715-4ce7-81b3-df0c1f57922f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:10.57999207 +0000 UTC m=+1516.554341115 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data") pod "rabbitmq-cell1-server-0" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.091997 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.092378 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" containerName="openstackclient" containerID="cri-o://d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830" gracePeriod=2 Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.135820 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.187865 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mjhbn"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.201045 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mjhbn"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.239639 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9ca3-account-create-update-57lcl"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.260570 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9ca3-account-create-update-57lcl"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.274485 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.286104 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.287249 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="openstack-network-exporter" containerID="cri-o://a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a" gracePeriod=300 Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.310062 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.310804 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" containerName="openstackclient" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.310822 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" containerName="openstackclient" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.311037 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" containerName="openstackclient" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.311970 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.320046 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.371650 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c461d8-2b64-4737-b0df-da4bddde822d" path="/var/lib/kubelet/pods/57c461d8-2b64-4737-b0df-da4bddde822d/volumes" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.372390 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db280fdf-08e2-4c0b-bc56-535c8a85be1a" path="/var/lib/kubelet/pods/db280fdf-08e2-4c0b-bc56-535c8a85be1a/volumes" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.373074 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc890a7e-8c34-46c9-ae49-3e5117149f34" path="/var/lib/kubelet/pods/dc890a7e-8c34-46c9-ae49-3e5117149f34/volumes" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.373977 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.374426 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="openstack-network-exporter" containerID="cri-o://1a272bf3dc5a9407cead8d9f3ebf4fd78348c18107084c4a2acf29f42c67cd6a" gracePeriod=300 Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.377029 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.396229 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.396345 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlgq\" (UniqueName: \"kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.403311 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc05-account-create-update-l986q"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.414783 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" probeResult="failure" output="command timed out" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.468963 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bc05-account-create-update-l986q"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.506423 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") 
" pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.506549 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tlgq\" (UniqueName: \"kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.508155 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.518014 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.536046 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" probeResult="failure" output="command timed out" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.565791 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.566200 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" containerID="cri-o://65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" gracePeriod=30 Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.566809 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="openstack-network-exporter" containerID="cri-o://fa01f639bc91875c2d0bcb559a386c138421ca21854b35458ce844de676ca39a" gracePeriod=30 Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.608982 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-722hs"] Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.618633 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.618728 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data podName:b87b73b4-2715-4ce7-81b3-df0c1f57922f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:11.618707581 +0000 UTC m=+1517.593056626 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data") pod "rabbitmq-cell1-server-0" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.655310 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tlgq\" (UniqueName: \"kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq\") pod \"neutron-bc05-account-create-update-sxz6w\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.689000 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-722hs"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.703684 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.724466 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.726070 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.734938 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerStarted","Data":"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"} Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.749329 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.788297 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.828535 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: E0121 14:56:10.828603 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data podName:df9714a2-fadf-48a3-8b71-07d7419cc713 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:11.328580795 +0000 UTC m=+1517.302929830 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data") pod "rabbitmq-server-0" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713") : configmap "rabbitmq-config-data" not found Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.884529 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x8ctw"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.904177 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1045-account-create-update-dd95j"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.906075 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1045-account-create-update-dd95j"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.919864 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-12d4-account-create-update-qnb6b"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.938338 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-12d4-account-create-update-qnb6b"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.939575 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.939777 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nld7z\" (UniqueName: \"kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.940364 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x8ctw"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.952907 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rkw29"] Jan 21 14:56:10 crc kubenswrapper[4834]: I0121 14:56:10.972147 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="ovsdbserver-nb" containerID="cri-o://3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" gracePeriod=300 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.048363 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nld7z\" (UniqueName: \"kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.048427 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:11 crc 
kubenswrapper[4834]: I0121 14:56:11.049493 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.083147 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rkw29"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.119918 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.136632 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.147800 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.147892 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="ovsdbserver-nb" Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.170771 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nld7z\" (UniqueName: \"kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z\") pod \"nova-api-1045-account-create-update-7ssd5\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.173007 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-73cb-account-create-update-qghqp"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.242310 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-73cb-account-create-update-qghqp"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.250679 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5nwpj"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.280202 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4nq5q"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.300671 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5nwpj"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.308470 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-4nq5q"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.326193 4834 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9wtcs"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.357128 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.360177 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.360520 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-xhsvz" podUID="04a45f24-7164-403c-954f-5ff46c148c5a" containerName="openstack-network-exporter" containerID="cri-o://e3bb8d41e0b523f5b37a3567db174269b4e1271905c76b1d4dc36d52baf6dbf3" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.375184 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.376901 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.377001 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data podName:df9714a2-fadf-48a3-8b71-07d7419cc713 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:12.376980377 +0000 UTC m=+1518.351329422 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data") pod "rabbitmq-server-0" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713") : configmap "rabbitmq-config-data" not found Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.379224 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.379724 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.379780 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.422831 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.424660 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="dnsmasq-dns" containerID="cri-o://83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb" gracePeriod=10 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.445661 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bmr8w"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.461303 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bmr8w"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.475346 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.476773 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-server" containerID="cri-o://631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477315 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="swift-recon-cron" containerID="cri-o://3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477364 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="rsync" containerID="cri-o://fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477403 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-expirer" containerID="cri-o://16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047" gracePeriod=30 
Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477440 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-updater" containerID="cri-o://537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477481 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-auditor" containerID="cri-o://7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477513 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-replicator" containerID="cri-o://9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477548 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-server" containerID="cri-o://fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477589 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-updater" containerID="cri-o://b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477641 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-auditor" containerID="cri-o://6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477677 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-replicator" containerID="cri-o://a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477720 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-server" containerID="cri-o://df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477762 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-reaper" containerID="cri-o://0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.477846 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-auditor" containerID="cri-o://f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 
14:56:11.477887 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-replicator" containerID="cri-o://5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.485424 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.486703 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69bb684bc8-6s7qv" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-log" containerID="cri-o://637192b126c7873039ac5b1bfa43915a4f297b8c2c14190ae08138d21215676f" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.486893 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69bb684bc8-6s7qv" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-api" containerID="cri-o://0b54aa54647967f433ead26be60f5ecca8009300967e54ee0684394f6f2cdd0b" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.580795 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2mgk7"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.598014 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8r9rw"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.628204 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="ovsdbserver-sb" containerID="cri-o://fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01" gracePeriod=299 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.634578 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.634885 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="cinder-scheduler" containerID="cri-o://db044be1aefe255c10ce8baeb7d8226a8e00d8f2d732191c49cc1dce3a593cd6" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.635405 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="probe" containerID="cri-o://83df69fcbb26d2aebb7daf416be409e283ad79ee46ccc601f9324e32b0922177" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.667312 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.667733 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api-log" containerID="cri-o://7939f32b333f3aaa2768f5185ab1a75ab0e7c265ea2a3d0ff3990714123a7f14" gracePeriod=30 Jan 21 14:56:11 crc kubenswrapper[4834]: I0121 14:56:11.667788 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api" containerID="cri-o://831f7e80b2dd51f5fba704c0975d18b653c249edfc68646cb8a57e93f78ca51e" gracePeriod=30 
Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.682954 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:11 crc kubenswrapper[4834]: E0121 14:56:11.683015 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data podName:b87b73b4-2715-4ce7-81b3-df0c1f57922f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:13.683001335 +0000 UTC m=+1519.657350380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data") pod "rabbitmq-cell1-server-0" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.713339 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8r9rw"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.730579 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2mgk7"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.753343 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6ed4-account-create-update-nplb4"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.772242 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6ed4-account-create-update-nplb4"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.797578 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.798590 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-httpd" containerID="cri-o://2ebbbaf00d9a369dd1ca5f46d2f165bdb5bf4264e991c134e7c3cd8817356d6f" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.798741 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-log" containerID="cri-o://42b08d33f6e569457d53d7d0fb1dde4b71a0fbe929495ac2886bed4d36c39ea2" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.829043 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cv77m"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.837809 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerID="0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.838188 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerDied","Data":"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.846016 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cv77m"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.851538 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_869db5eb-b0d3-407e-a28b-1d23b27a0299/ovsdbserver-sb/0.log" Jan 21 14:56:12 crc 
kubenswrapper[4834]: I0121 14:56:11.851593 4834 generic.go:334] "Generic (PLEG): container finished" podID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerID="1a272bf3dc5a9407cead8d9f3ebf4fd78348c18107084c4a2acf29f42c67cd6a" exitCode=2 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.851617 4834 generic.go:334] "Generic (PLEG): container finished" podID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerID="fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01" exitCode=143 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.851704 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerDied","Data":"1a272bf3dc5a9407cead8d9f3ebf4fd78348c18107084c4a2acf29f42c67cd6a"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.851740 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerDied","Data":"fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.874808 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.875301 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-api" containerID="cri-o://5021b65d7845ef850fa0bbc7c486df5fe601a95715feb8020d57c8b3cb5c5c7e" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.876904 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" containerID="cri-o://0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.894744 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4whjk"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.905994 4834 generic.go:334] "Generic (PLEG): container finished" podID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerID="fa01f639bc91875c2d0bcb559a386c138421ca21854b35458ce844de676ca39a" exitCode=2 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.906083 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerDied","Data":"fa01f639bc91875c2d0bcb559a386c138421ca21854b35458ce844de676ca39a"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.913636 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xhsvz_04a45f24-7164-403c-954f-5ff46c148c5a/openstack-network-exporter/0.log" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.913681 4834 generic.go:334] "Generic (PLEG): container finished" podID="04a45f24-7164-403c-954f-5ff46c148c5a" containerID="e3bb8d41e0b523f5b37a3567db174269b4e1271905c76b1d4dc36d52baf6dbf3" exitCode=2 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.913745 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.913777 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xhsvz" 
event={"ID":"04a45f24-7164-403c-954f-5ff46c148c5a","Type":"ContainerDied","Data":"e3bb8d41e0b523f5b37a3567db174269b4e1271905c76b1d4dc36d52baf6dbf3"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.929384 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.931442 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4whjk"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.945586 4834 generic.go:334] "Generic (PLEG): container finished" podID="950459bc-faae-4448-8cf7-289275204041" containerID="83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.945665 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" event={"ID":"950459bc-faae-4448-8cf7-289275204041","Type":"ContainerDied","Data":"83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.950329 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.950666 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-log" containerID="cri-o://b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.950853 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-httpd" containerID="cri-o://4cdf01d893881ac1c03282850be08114203e3678d346c96a0db7e061046cf925" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.955912 4834 generic.go:334] "Generic (PLEG): container finished" podID="46ef0752-abe1-465f-8b0b-77906b861c12" containerID="637192b126c7873039ac5b1bfa43915a4f297b8c2c14190ae08138d21215676f" exitCode=143 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:11.956043 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69bb684bc8-6s7qv" event={"ID":"46ef0752-abe1-465f-8b0b-77906b861c12","Type":"ContainerDied","Data":"637192b126c7873039ac5b1bfa43915a4f297b8c2c14190ae08138d21215676f"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.019748 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sw7v4"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.046933 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sw7v4"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066030 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066080 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066090 4834 
generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066119 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066133 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066142 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398" exitCode=0 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066276 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066293 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066306 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066317 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.066344 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.085153 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.100123 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.102774 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_972527b7-5fbf-4cb1-9495-155dd778bba6/ovsdbserver-nb/0.log" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.102817 4834 generic.go:334] "Generic (PLEG): container finished" podID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerID="a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a" exitCode=2 
Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.102835 4834 generic.go:334] "Generic (PLEG): container finished" podID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerID="3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" exitCode=143 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.102857 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerDied","Data":"a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.102883 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerDied","Data":"3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f"} Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.129865 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.140787 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.141306 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-log" containerID="cri-o://dfd1d1f4b01cbe2d02e24b2d79800e43e22f1f1484f9243a7619cca4445cac51" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.142038 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-api" containerID="cri-o://008e24c6de4a7972abcecfe67f07648df9fa7fff4e253229a970cbbe16f3e832" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.172463 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sfvnk"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.189226 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="rabbitmq" containerID="cri-o://74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf" gracePeriod=604800 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.211807 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sfvnk"] Jan 21 14:56:12 crc kubenswrapper[4834]: E0121 14:56:12.230835 4834 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 14:56:12 crc kubenswrapper[4834]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:56:12 crc kubenswrapper[4834]: + source /usr/local/bin/container-scripts/functions Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNBridge=br-int Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNEncapType=geneve Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNAvailabilityZones= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ EnableChassisAsGateway=true Jan 21 14:56:12 crc kubenswrapper[4834]: ++ PhysicalNetworks= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNHostName= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:56:12 crc kubenswrapper[4834]: ++ 
ovs_dir=/var/lib/openvswitch Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:56:12 crc kubenswrapper[4834]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + sleep 0.5 Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + cleanup_ovsdb_server_semaphore Jan 21 14:56:12 crc kubenswrapper[4834]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:56:12 crc kubenswrapper[4834]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-ztq6r" message=< Jan 21 14:56:12 crc kubenswrapper[4834]: Exiting ovsdb-server (5) [ OK ] Jan 21 14:56:12 crc kubenswrapper[4834]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:56:12 crc kubenswrapper[4834]: + source /usr/local/bin/container-scripts/functions Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNBridge=br-int Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNEncapType=geneve Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNAvailabilityZones= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ EnableChassisAsGateway=true Jan 21 14:56:12 crc kubenswrapper[4834]: ++ PhysicalNetworks= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNHostName= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:56:12 crc kubenswrapper[4834]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:56:12 crc kubenswrapper[4834]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + sleep 0.5 Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + cleanup_ovsdb_server_semaphore Jan 21 14:56:12 crc kubenswrapper[4834]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:56:12 crc kubenswrapper[4834]: > Jan 21 14:56:12 crc kubenswrapper[4834]: E0121 14:56:12.230884 4834 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 14:56:12 crc kubenswrapper[4834]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 21 14:56:12 crc kubenswrapper[4834]: + source /usr/local/bin/container-scripts/functions Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNBridge=br-int Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNRemote=tcp:localhost:6642 Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNEncapType=geneve Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNAvailabilityZones= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ EnableChassisAsGateway=true Jan 21 14:56:12 crc kubenswrapper[4834]: ++ PhysicalNetworks= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ OVNHostName= Jan 21 14:56:12 crc kubenswrapper[4834]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 21 14:56:12 crc kubenswrapper[4834]: ++ ovs_dir=/var/lib/openvswitch Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 21 14:56:12 crc kubenswrapper[4834]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 21 14:56:12 crc kubenswrapper[4834]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + sleep 0.5 Jan 21 14:56:12 crc kubenswrapper[4834]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 21 14:56:12 crc kubenswrapper[4834]: + cleanup_ovsdb_server_semaphore Jan 21 14:56:12 crc kubenswrapper[4834]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 21 14:56:12 crc kubenswrapper[4834]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 21 14:56:12 crc kubenswrapper[4834]: > pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" containerID="cri-o://71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.230959 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" containerID="cri-o://71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.243626 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.252784 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker-log" containerID="cri-o://9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.253724 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker" containerID="cri-o://b0fceacb1c47e8063d1e7c22118f445dc2ab5d3565c665ac3bb3f6be80d3738d" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.257409 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fsjzd"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.314079 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.314341 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener-log" containerID="cri-o://e22110aadbd884384da0840ce5e19cbdcf2bf4d1b9fffe3c6126085fdedeb6df" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.315168 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener" containerID="cri-o://52bc71d8458ec442f5c03fd42e72e4aee5dfcd1add1d1cd4d9a3fbb418f22b46" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.380099 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" containerID="cri-o://bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" gracePeriod=29 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.392507 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123d466f-e93e-478a-8732-465c6099201b" 
path="/var/lib/kubelet/pods/123d466f-e93e-478a-8732-465c6099201b/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.393491 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13827989-07c5-4417-9be2-574fbca9ddbb" path="/var/lib/kubelet/pods/13827989-07c5-4417-9be2-574fbca9ddbb/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.394289 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1491d4f8-0e00-406a-8e91-51d3dc0e5a68" path="/var/lib/kubelet/pods/1491d4f8-0e00-406a-8e91-51d3dc0e5a68/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.397460 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2907abf2-1f9d-497d-bfb3-bf4094e7c174" path="/var/lib/kubelet/pods/2907abf2-1f9d-497d-bfb3-bf4094e7c174/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.398297 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2958c1-85bd-45a4-962f-5af74b8b2896" path="/var/lib/kubelet/pods/3a2958c1-85bd-45a4-962f-5af74b8b2896/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.399066 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed348d9-9d38-4546-a839-0930def4c9f3" path="/var/lib/kubelet/pods/3ed348d9-9d38-4546-a839-0930def4c9f3/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.401487 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4734873a-25bb-46e2-bb5b-68f9cd776682" path="/var/lib/kubelet/pods/4734873a-25bb-46e2-bb5b-68f9cd776682/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.402295 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b" path="/var/lib/kubelet/pods/5a04fabb-f3bc-4a9f-a3c6-2e6eb4f0b74b/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.402881 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c79cc29-09cc-4649-815d-8e5ea52e05c9" path="/var/lib/kubelet/pods/5c79cc29-09cc-4649-815d-8e5ea52e05c9/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.403512 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692edde8-3448-4e1f-8996-b301c823e43d" path="/var/lib/kubelet/pods/692edde8-3448-4e1f-8996-b301c823e43d/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: E0121 14:56:12.403694 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd802b1_4bcc_4604_a82e_5e84a0f0338e.slice/crio-0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd802b1_4bcc_4604_a82e_5e84a0f0338e.slice/crio-conmon-0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-conmon-5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972527b7_5fbf_4cb1_9495_155dd778bba6.slice/crio-conmon-a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869db5eb_b0d3_407e_a28b_1d23b27a0299.slice/crio-fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84309501_c399_4d83_9876_00b58ba67b0d.slice/crio-9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972527b7_5fbf_4cb1_9495_155dd778bba6.slice/crio-conmon-3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f229152_a987_497e_8777_937b4f6880d0.slice/crio-conmon-71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod950459bc_faae_4448_8cf7_289275204041.slice/crio-conmon-83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f229152_a987_497e_8777_937b4f6880d0.slice/crio-71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod507328e4_20c4_4e84_b781_e4889419607e.slice/crio-0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08e8a6dd_5bbd_4f91_9860_2b3146ba47a2.slice/crio-b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-conmon-df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-conmon-fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod835da3fd_0497_4072_9d76_122d19300787.slice/crio-fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693.scope\": RecentStats: unable to find data in memory cache]" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.404795 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a339279-6805-4bb2-9f82-c2549a8a695f" path="/var/lib/kubelet/pods/6a339279-6805-4bb2-9f82-c2549a8a695f/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.405443 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8becb166-563c-43ec-8d07-567f51c39d64" path="/var/lib/kubelet/pods/8becb166-563c-43ec-8d07-567f51c39d64/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.406121 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b5abd5d5-addd-4b84-a301-86a55a7e23cf" path="/var/lib/kubelet/pods/b5abd5d5-addd-4b84-a301-86a55a7e23cf/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.411537 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a" path="/var/lib/kubelet/pods/ba1c33e8-60d6-4e37-9fc0-d9ba6e13901a/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.413792 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4abd259-e33a-45f5-bf8a-88c9828b4877" path="/var/lib/kubelet/pods/d4abd259-e33a-45f5-bf8a-88c9828b4877/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.420386 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4069528-b187-472b-a3b0-fa87693b4626" path="/var/lib/kubelet/pods/e4069528-b187-472b-a3b0-fa87693b4626/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.422339 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9e0a88-3e00-4c83-860f-25a0d932f773" path="/var/lib/kubelet/pods/ee9e0a88-3e00-4c83-860f-25a0d932f773/volumes" Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.424598 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.424652 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fsjzd"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.424678 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x6zbx"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.426062 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" containerID="cri-o://1c0cc5a6d21ee15bd2b37944b66a859d2a28a82ee9266a7e79976bf6ab2c55a8" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.426879 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" containerID="cri-o://85378de999d4b746f726db38c15173ce2ec1d4f11248e24fa1ad5e385ff441aa" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: E0121 14:56:12.427775 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:56:12 crc kubenswrapper[4834]: E0121 14:56:12.427844 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data podName:df9714a2-fadf-48a3-8b71-07d7419cc713 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:14.427825852 +0000 UTC m=+1520.402174897 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data") pod "rabbitmq-server-0" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713") : configmap "rabbitmq-config-data" not found Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.431004 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x6zbx"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.441011 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-trz62"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.451144 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-trz62"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.465799 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.466189 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api-log" containerID="cri-o://d0842ea9c1eb4c9eb1b4ebac20aeac250fca82a332954d0e335a267a9f99c8f0" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.466546 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api" containerID="cri-o://19ad5e1d36dc4cca92dc2d55710cdc46c1e446991cfdd3c7f69a92b2d721c0f8" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.477700 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8d6d-account-create-update-gp9sw"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.486578 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qxnvw"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.503073 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qxnvw"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.515587 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8d6d-account-create-update-gp9sw"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.523391 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.531375 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.533201 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.536735 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" containerID="cri-o://939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.557467 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 
14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.597557 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.705102 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.705492 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerName="nova-scheduler-scheduler" containerID="cri-o://8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" gracePeriod=30 Jan 21 14:56:12 crc kubenswrapper[4834]: I0121 14:56:12.779616 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="rabbitmq" containerID="cri-o://41b8202e62174a8eda17f1a9b9dd2a9295f09268d93892ad31cfad9446e70c71" gracePeriod=604800 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.089755 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.198:6080/vnc_lite.html\": dial tcp 10.217.0.198:6080: connect: connection refused" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.090492 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jjlh"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.115454 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.123056 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jjlh"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.132691 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.132941 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.160226 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lkvk9"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.166581 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.166924 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="61306868-aa06-4574-a568-b36b22fd6db6" containerName="nova-cell0-conductor-conductor" containerID="cri-o://537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" gracePeriod=30 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.169847 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xhsvz_04a45f24-7164-403c-954f-5ff46c148c5a/openstack-network-exporter/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.169917 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.172750 4834 generic.go:334] "Generic (PLEG): container finished" podID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerID="1c0cc5a6d21ee15bd2b37944b66a859d2a28a82ee9266a7e79976bf6ab2c55a8" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.172843 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerDied","Data":"1c0cc5a6d21ee15bd2b37944b66a859d2a28a82ee9266a7e79976bf6ab2c55a8"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.177064 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lkvk9"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.193359 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_972527b7-5fbf-4cb1-9495-155dd778bba6/ovsdbserver-nb/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.193432 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.193623 4834 generic.go:334] "Generic (PLEG): container finished" podID="84309501-c399-4d83-9876-00b58ba67b0d" containerID="9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.193667 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerDied","Data":"9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.239604 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_869db5eb-b0d3-407e-a28b-1d23b27a0299/ovsdbserver-sb/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.239893 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.252994 4834 generic.go:334] "Generic (PLEG): container finished" podID="1e74faea-a792-455c-a253-7012f98c6acf" containerID="83df69fcbb26d2aebb7daf416be409e283ad79ee46ccc601f9324e32b0922177" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.253316 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerDied","Data":"83df69fcbb26d2aebb7daf416be409e283ad79ee46ccc601f9324e32b0922177"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.256372 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.263475 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.295345 4834 generic.go:334] "Generic (PLEG): container finished" podID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerID="7939f32b333f3aaa2768f5185ab1a75ab0e7c265ea2a3d0ff3990714123a7f14" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.295474 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerDied","Data":"7939f32b333f3aaa2768f5185ab1a75ab0e7c265ea2a3d0ff3990714123a7f14"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.313710 4834 generic.go:334] "Generic (PLEG): container finished" podID="507328e4-20c4-4e84-b781-e4889419607e" containerID="0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.313868 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerDied","Data":"0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99"} Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.333643 4834 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:56:13 crc kubenswrapper[4834]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: if [ -n "glance" ]; then Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="glance" Jan 21 14:56:13 crc kubenswrapper[4834]: else Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="*" Jan 21 14:56:13 crc kubenswrapper[4834]: fi Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: # going for maximum compatibility here: Jan 21 14:56:13 crc kubenswrapper[4834]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:56:13 crc kubenswrapper[4834]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:56:13 crc kubenswrapper[4834]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:56:13 crc kubenswrapper[4834]: # support updates Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.335735 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-9ca3-account-create-update-prg2p" podUID="8f811e2a-291e-401d-9b0f-32146bad80ac" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.349836 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.349938 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.349966 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.349997 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350053 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350113 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpvzs\" (UniqueName: \"kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350140 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350160 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350210 4834 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350246 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8br\" (UniqueName: \"kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350278 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config\") pod \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350325 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350358 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350393 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle\") pod \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350422 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350468 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350493 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350526 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret\") pod \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350549 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350572 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350592 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhc9d\" (UniqueName: \"kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350630 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350665 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350703 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350773 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.350823 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.351899 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.354281 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts" (OuterVolumeSpecName: "scripts") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363100 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config\") pod \"869db5eb-b0d3-407e-a28b-1d23b27a0299\" (UID: \"869db5eb-b0d3-407e-a28b-1d23b27a0299\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363218 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363258 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc\") pod \"950459bc-faae-4448-8cf7-289275204041\" (UID: \"950459bc-faae-4448-8cf7-289275204041\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363326 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfwzt\" (UniqueName: \"kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt\") pod \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\" (UID: \"67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363374 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"972527b7-5fbf-4cb1-9495-155dd778bba6\" (UID: \"972527b7-5fbf-4cb1-9495-155dd778bba6\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.363411 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbmt\" (UniqueName: \"kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt\") pod \"04a45f24-7164-403c-954f-5ff46c148c5a\" (UID: \"04a45f24-7164-403c-954f-5ff46c148c5a\") " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.370022 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br" (OuterVolumeSpecName: "kube-api-access-kr8br") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "kube-api-access-kr8br". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.376727 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs" (OuterVolumeSpecName: "kube-api-access-bpvzs") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "kube-api-access-bpvzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.376804 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config" (OuterVolumeSpecName: "config") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.378879 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d" (OuterVolumeSpecName: "kube-api-access-qhc9d") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "kube-api-access-qhc9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.379045 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config" (OuterVolumeSpecName: "config") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.385624 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.385863 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.388242 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.389249 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts" (OuterVolumeSpecName: "scripts") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.393578 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.397206 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.400726 4834 generic.go:334] "Generic (PLEG): container finished" podID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerID="42b08d33f6e569457d53d7d0fb1dde4b71a0fbe929495ac2886bed4d36c39ea2" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.400837 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerDied","Data":"42b08d33f6e569457d53d7d0fb1dde4b71a0fbe929495ac2886bed4d36c39ea2"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.415306 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.418050 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt" (OuterVolumeSpecName: "kube-api-access-lfwzt") pod "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" (UID: "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2"). InnerVolumeSpecName "kube-api-access-lfwzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.432395 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config" (OuterVolumeSpecName: "config") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.436790 4834 generic.go:334] "Generic (PLEG): container finished" podID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerID="d0842ea9c1eb4c9eb1b4ebac20aeac250fca82a332954d0e335a267a9f99c8f0" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.436911 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerDied","Data":"d0842ea9c1eb4c9eb1b4ebac20aeac250fca82a332954d0e335a267a9f99c8f0"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.438559 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt" (OuterVolumeSpecName: "kube-api-access-nlbmt") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "kube-api-access-nlbmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.463059 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.467196 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.467232 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: W0121 14:56:13.485994 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9effd323_5ae3_4ed8_a2e0_4659dd231bbd.slice/crio-169b3fe624ac21635f273cad7f3e8ffc2ca34497db222d84f7c2641428255804 WatchSource:0}: Error finding container 169b3fe624ac21635f273cad7f3e8ffc2ca34497db222d84f7c2641428255804: Status 404 returned error can't find the container with id 169b3fe624ac21635f273cad7f3e8ffc2ca34497db222d84f7c2641428255804 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.487664 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_869db5eb-b0d3-407e-a28b-1d23b27a0299/ovsdbserver-sb/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.487804 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"869db5eb-b0d3-407e-a28b-1d23b27a0299","Type":"ContainerDied","Data":"d4b3c82b0732fc02916a85c35db89df6aa5ff66a9feedd6cbc3955253d6ce79b"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.487896 4834 scope.go:117] "RemoveContainer" containerID="1a272bf3dc5a9407cead8d9f3ebf4fd78348c18107084c4a2acf29f42c67cd6a" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.488298 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.554714 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" event={"ID":"950459bc-faae-4448-8cf7-289275204041","Type":"ContainerDied","Data":"8dd5ea4d35cc429e85a3f341dec3c325095e84c934b2737f7c79ecf1a2d7be85"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.554857 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-snm6f" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.556391 4834 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:56:13 crc kubenswrapper[4834]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: if [ -n "cinder" ]; then Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="cinder" Jan 21 14:56:13 crc kubenswrapper[4834]: else Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="*" Jan 21 14:56:13 crc kubenswrapper[4834]: fi Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: # going for maximum compatibility here: Jan 21 14:56:13 crc kubenswrapper[4834]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:56:13 crc kubenswrapper[4834]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:56:13 crc kubenswrapper[4834]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:56:13 crc kubenswrapper[4834]: # support updates Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.557775 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-1e5f-account-create-update-sm488" podUID="9effd323-5ae3-4ed8-a2e0-4659dd231bbd" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.574021 4834 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:56:13 crc kubenswrapper[4834]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: if [ -n "neutron" ]; then Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="neutron" Jan 21 14:56:13 crc kubenswrapper[4834]: else Jan 21 14:56:13 crc kubenswrapper[4834]: GRANT_DATABASE="*" Jan 21 14:56:13 crc kubenswrapper[4834]: fi Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: # going for maximum compatibility here: Jan 21 14:56:13 crc kubenswrapper[4834]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:56:13 crc kubenswrapper[4834]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:56:13 crc kubenswrapper[4834]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:56:13 crc kubenswrapper[4834]: # support updates Jan 21 14:56:13 crc kubenswrapper[4834]: Jan 21 14:56:13 crc kubenswrapper[4834]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.575479 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-bc05-account-create-update-sxz6w" podUID="093c745f-d0bf-4c8e-aceb-c40d42ad2ae5" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578843 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/869db5eb-b0d3-407e-a28b-1d23b27a0299-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578871 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfwzt\" (UniqueName: \"kubernetes.io/projected/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-kube-api-access-lfwzt\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578894 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578936 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbmt\" (UniqueName: \"kubernetes.io/projected/04a45f24-7164-403c-954f-5ff46c148c5a-kube-api-access-nlbmt\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578948 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578958 4834 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578968 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578978 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpvzs\" (UniqueName: \"kubernetes.io/projected/869db5eb-b0d3-407e-a28b-1d23b27a0299-kube-api-access-bpvzs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.578995 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.579007 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8br\" (UniqueName: \"kubernetes.io/projected/950459bc-faae-4448-8cf7-289275204041-kube-api-access-kr8br\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 
14:56:13.579017 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/04a45f24-7164-403c-954f-5ff46c148c5a-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.579027 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a45f24-7164-403c-954f-5ff46c148c5a-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.579037 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhc9d\" (UniqueName: \"kubernetes.io/projected/972527b7-5fbf-4cb1-9495-155dd778bba6-kube-api-access-qhc9d\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.579047 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/972527b7-5fbf-4cb1-9495-155dd778bba6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.588034 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fxzqd"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.595941 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.618179 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646295 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646496 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646555 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646627 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646640 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646772 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.646861 4834 generic.go:334] "Generic (PLEG): 
container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647017 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647151 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647249 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647403 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647614 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647715 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647787 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647847 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.647904 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.653206 4834 generic.go:334] "Generic (PLEG): container finished" podID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" containerID="d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830" exitCode=137 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.653524 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.665284 4834 generic.go:334] "Generic (PLEG): container finished" podID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerID="e22110aadbd884384da0840ce5e19cbdcf2bf4d1b9fffe3c6126085fdedeb6df" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.665398 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerDied","Data":"e22110aadbd884384da0840ce5e19cbdcf2bf4d1b9fffe3c6126085fdedeb6df"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.666120 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" (UID: "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.672111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.672350 4834 generic.go:334] "Generic (PLEG): container finished" podID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerID="dfd1d1f4b01cbe2d02e24b2d79800e43e22f1f1484f9243a7619cca4445cac51" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.672423 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerDied","Data":"dfd1d1f4b01cbe2d02e24b2d79800e43e22f1f1484f9243a7619cca4445cac51"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.676291 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xhsvz_04a45f24-7164-403c-954f-5ff46c148c5a/openstack-network-exporter/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.676400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xhsvz" event={"ID":"04a45f24-7164-403c-954f-5ff46c148c5a","Type":"ContainerDied","Data":"7316f0b465ba7302445a100be55e5fdd0ce2f5a8a584dd0aec6aa55a6003dedb"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.676494 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xhsvz" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.683335 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.683361 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.683374 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.683472 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:13 crc kubenswrapper[4834]: E0121 14:56:13.683520 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data podName:b87b73b4-2715-4ce7-81b3-df0c1f57922f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:17.683505045 +0000 UTC m=+1523.657854090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data") pod "rabbitmq-cell1-server-0" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.704236 4834 generic.go:334] "Generic (PLEG): container finished" podID="5f229152-a987-497e-8777-937b4f6880d0" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" exitCode=0 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.704345 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerDied","Data":"71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.721156 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_972527b7-5fbf-4cb1-9495-155dd778bba6/ovsdbserver-nb/0.log" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.721333 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.722250 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"972527b7-5fbf-4cb1-9495-155dd778bba6","Type":"ContainerDied","Data":"05523ee29bbaa4f86d7bd548893b96ad95e1068fa23a45ec488633673a70db02"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.752874 4834 generic.go:334] "Generic (PLEG): container finished" podID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerID="b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d" exitCode=143 Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.752939 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerDied","Data":"b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d"} Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.761431 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config" (OuterVolumeSpecName: "config") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.785554 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.795027 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" (UID: "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.799529 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.813569 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.816111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.823613 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" (UID: "67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.857317 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.862233 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.866515 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "04a45f24-7164-403c-954f-5ff46c148c5a" (UID: "04a45f24-7164-403c-954f-5ff46c148c5a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892720 4834 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892763 4834 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892776 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892784 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892813 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892821 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892831 4834 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.892840 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a45f24-7164-403c-954f-5ff46c148c5a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.947171 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.947224 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.994145 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:13 crc kubenswrapper[4834]: I0121 14:56:13.994285 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.014703 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "950459bc-faae-4448-8cf7-289275204041" (UID: "950459bc-faae-4448-8cf7-289275204041"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.044238 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "869db5eb-b0d3-407e-a28b-1d23b27a0299" (UID: "869db5eb-b0d3-407e-a28b-1d23b27a0299"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.046832 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.059693 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.060099 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-56587d777c-2rx88" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" containerID="cri-o://ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49" gracePeriod=30 Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.060753 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-56587d777c-2rx88" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-server" containerID="cri-o://f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5" gracePeriod=30 Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.075279 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "972527b7-5fbf-4cb1-9495-155dd778bba6" (UID: "972527b7-5fbf-4cb1-9495-155dd778bba6"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.095835 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/869db5eb-b0d3-407e-a28b-1d23b27a0299-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.095865 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.095874 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/972527b7-5fbf-4cb1-9495-155dd778bba6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.095885 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950459bc-faae-4448-8cf7-289275204041-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.104650 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 14:56:14 crc kubenswrapper[4834]: W0121 14:56:14.108473 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28da9710_d30d_4fe5_ab02_aadd9b32ab1e.slice/crio-6077beabe0839a0b10e944998fdc96f749ea64c0c4635de3e7066861e44d308c WatchSource:0}: Error finding container 6077beabe0839a0b10e944998fdc96f749ea64c0c4635de3e7066861e44d308c: Status 404 returned error can't find the container with id 6077beabe0839a0b10e944998fdc96f749ea64c0c4635de3e7066861e44d308c Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.140201 4834 scope.go:117] "RemoveContainer" containerID="fc16a0399ebec76c3711f257960965a75e78472dd08cb4f571245e6cd9a2da01" Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.140815 4834 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 14:56:14 crc 
kubenswrapper[4834]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: if [ -n "nova_api" ]; then Jan 21 14:56:14 crc kubenswrapper[4834]: GRANT_DATABASE="nova_api" Jan 21 14:56:14 crc kubenswrapper[4834]: else Jan 21 14:56:14 crc kubenswrapper[4834]: GRANT_DATABASE="*" Jan 21 14:56:14 crc kubenswrapper[4834]: fi Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: # going for maximum compatibility here: Jan 21 14:56:14 crc kubenswrapper[4834]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 21 14:56:14 crc kubenswrapper[4834]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 21 14:56:14 crc kubenswrapper[4834]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 21 14:56:14 crc kubenswrapper[4834]: # support updates Jan 21 14:56:14 crc kubenswrapper[4834]: Jan 21 14:56:14 crc kubenswrapper[4834]: $MYSQL_CMD < logger="UnhandledError" Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.147344 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-1045-account-create-update-7ssd5" podUID="28da9710-d30d-4fe5-ab02-aadd9b32ab1e" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.232069 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.246419 4834 scope.go:117] "RemoveContainer" containerID="83e77c488c0398699d529ac8576e497423799fcbdb4b72fb82babb210f1861cb" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.248242 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-xhsvz"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.256802 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.270158 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-snm6f"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.289463 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.300410 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.334114 4834 scope.go:117] "RemoveContainer" containerID="b8baf943eda50c2543d74dbf54fc9dba0c6d901b97d6e43e93a94ac11756ab06" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.356800 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a45f24-7164-403c-954f-5ff46c148c5a" 
path="/var/lib/kubelet/pods/04a45f24-7164-403c-954f-5ff46c148c5a/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.358791 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115e04e0-b027-42bb-bdbf-f860ef73aef3" path="/var/lib/kubelet/pods/115e04e0-b027-42bb-bdbf-f860ef73aef3/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.360444 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30331d52-9c56-49cd-8b97-0869941cad41" path="/var/lib/kubelet/pods/30331d52-9c56-49cd-8b97-0869941cad41/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.363436 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5b0fd2-ece5-4adc-9b6b-25103f997228" path="/var/lib/kubelet/pods/3f5b0fd2-ece5-4adc-9b6b-25103f997228/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.364203 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2" path="/var/lib/kubelet/pods/67d4cbcb-8b7f-476e-8fe1-3b16c4d052c2/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.365824 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7775589a-98d8-4291-9c93-26bb67d1c99f" path="/var/lib/kubelet/pods/7775589a-98d8-4291-9c93-26bb67d1c99f/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.376876 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" path="/var/lib/kubelet/pods/869db5eb-b0d3-407e-a28b-1d23b27a0299/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.377553 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2f6b9b-bf7b-4026-9269-7f77233ec402" path="/var/lib/kubelet/pods/8d2f6b9b-bf7b-4026-9269-7f77233ec402/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.378119 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950459bc-faae-4448-8cf7-289275204041" path="/var/lib/kubelet/pods/950459bc-faae-4448-8cf7-289275204041/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.388361 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963497bf-0dd8-4d5c-a046-360dbfdaf2a6" path="/var/lib/kubelet/pods/963497bf-0dd8-4d5c-a046-360dbfdaf2a6/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.428318 4834 scope.go:117] "RemoveContainer" containerID="d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.439094 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97616acb-173e-474b-a7f7-dcbc8bd2f0a6" path="/var/lib/kubelet/pods/97616acb-173e-474b-a7f7-dcbc8bd2f0a6/volumes" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.498202 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529348 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529433 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529541 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529606 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529710 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvtx\" (UniqueName: \"kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529822 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529894 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.529953 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs\") pod \"73b312a8-0dee-488f-b998-4653b1cce8be\" (UID: \"73b312a8-0dee-488f-b998-4653b1cce8be\") " Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.530402 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.530455 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data podName:df9714a2-fadf-48a3-8b71-07d7419cc713 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:18.530438351 +0000 UTC m=+1524.504787396 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data") pod "rabbitmq-server-0" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713") : configmap "rabbitmq-config-data" not found Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.531823 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.533876 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.535153 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.535279 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.543404 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx" (OuterVolumeSpecName: "kube-api-access-zzvtx") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "kube-api-access-zzvtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.631489 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.631514 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.631523 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvtx\" (UniqueName: \"kubernetes.io/projected/73b312a8-0dee-488f-b998-4653b1cce8be-kube-api-access-zzvtx\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.631532 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.631541 4834 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/73b312a8-0dee-488f-b998-4653b1cce8be-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.640123 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.655424 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.667598 4834 scope.go:117] "RemoveContainer" containerID="d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830" Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.669094 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830\": container with ID starting with d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830 not found: ID does not exist" containerID="d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.669134 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830"} err="failed to get container status \"d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830\": rpc error: code = NotFound desc = could not find container \"d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830\": container with ID starting with d0a7ab2cdc51b546372352afbcac0ac65babf4a304c1b52f2da43b82d8f74830 not found: ID does not exist" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.669161 4834 scope.go:117] "RemoveContainer" containerID="e3bb8d41e0b523f5b37a3567db174269b4e1271905c76b1d4dc36d52baf6dbf3" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.672635 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.674294 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.730204 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "73b312a8-0dee-488f-b998-4653b1cce8be" (UID: "73b312a8-0dee-488f-b998-4653b1cce8be"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.750740 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.750840 4834 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.759334 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b312a8-0dee-488f-b998-4653b1cce8be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.809881 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.811508 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.828129 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-7ssd5" event={"ID":"28da9710-d30d-4fe5-ab02-aadd9b32ab1e","Type":"ContainerStarted","Data":"6077beabe0839a0b10e944998fdc96f749ea64c0c4635de3e7066861e44d308c"} Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.841602 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.857860 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-sxz6w" event={"ID":"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5","Type":"ContainerStarted","Data":"82484a5e786c32c515fbbb7fbe3adb4a879e90b3531cc6bd3bb7afff5a7da4bb"} Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.878639 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:14 crc kubenswrapper[4834]: E0121 14:56:14.879200 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="61306868-aa06-4574-a568-b36b22fd6db6" containerName="nova-cell0-conductor-conductor" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.911585 4834 generic.go:334] "Generic (PLEG): container finished" podID="73b312a8-0dee-488f-b998-4653b1cce8be" containerID="939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99" 
exitCode=0 Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.911981 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.913310 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerDied","Data":"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99"} Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.913366 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"73b312a8-0dee-488f-b998-4653b1cce8be","Type":"ContainerDied","Data":"4d11fa71a06c241757e6a30a100815e378367ba7c41bcd344fba0cc251dfe7fc"} Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.949748 4834 generic.go:334] "Generic (PLEG): container finished" podID="1e74faea-a792-455c-a253-7012f98c6acf" containerID="db044be1aefe255c10ce8baeb7d8226a8e00d8f2d732191c49cc1dce3a593cd6" exitCode=0 Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.949866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerDied","Data":"db044be1aefe255c10ce8baeb7d8226a8e00d8f2d732191c49cc1dce3a593cd6"} Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.949908 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1e74faea-a792-455c-a253-7012f98c6acf","Type":"ContainerDied","Data":"d223903295050d2c693147a86f9aa565224340b5b63df996f2e8a775a145cba1"} Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.949935 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d223903295050d2c693147a86f9aa565224340b5b63df996f2e8a775a145cba1" Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.956870 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-sm488" event={"ID":"9effd323-5ae3-4ed8-a2e0-4659dd231bbd","Type":"ContainerStarted","Data":"169b3fe624ac21635f273cad7f3e8ffc2ca34497db222d84f7c2641428255804"} Jan 21 14:56:14 crc kubenswrapper[4834]: I0121 14:56:14.986009 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.001162 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerStarted","Data":"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.022997 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.051606 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2ffx" podStartSLOduration=4.01277631 podStartE2EDuration="8.051574845s" podCreationTimestamp="2026-01-21 14:56:07 +0000 UTC" firstStartedPulling="2026-01-21 14:56:08.470187534 +0000 UTC m=+1514.444536569" lastFinishedPulling="2026-01-21 14:56:12.508986059 +0000 UTC m=+1518.483335104" observedRunningTime="2026-01-21 14:56:15.045550337 +0000 UTC m=+1521.019899392" watchObservedRunningTime="2026-01-21 14:56:15.051574845 +0000 UTC m=+1521.025923890" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.062677 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-prg2p" event={"ID":"8f811e2a-291e-401d-9b0f-32146bad80ac","Type":"ContainerStarted","Data":"c69e537756732cd275d2dad2fc022945cc928c987eb5958b32759ce5d87c47b4"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.084719 4834 scope.go:117] "RemoveContainer" containerID="a73cfecc848bf12c70aa981c92c2bea59b70ce49b289daf7e8fd02a4758aca8a" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087367 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087424 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mccxd\" (UniqueName: \"kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087482 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087525 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087546 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.087667 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts\") pod \"1e74faea-a792-455c-a253-7012f98c6acf\" (UID: \"1e74faea-a792-455c-a253-7012f98c6acf\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.088444 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.104692 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts" (OuterVolumeSpecName: "scripts") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.109165 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd" (OuterVolumeSpecName: "kube-api-access-mccxd") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "kube-api-access-mccxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.113103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.119393 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.121504 4834 scope.go:117] "RemoveContainer" containerID="f5c669d4fc60c3fe1ef25f2b49cb0ecb5679b1b1b63b75418dcd0684cfa0bca4" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.122016 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxzqd" event={"ID":"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f","Type":"ContainerStarted","Data":"f5c669d4fc60c3fe1ef25f2b49cb0ecb5679b1b1b63b75418dcd0684cfa0bca4"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.122055 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxzqd" event={"ID":"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f","Type":"ContainerStarted","Data":"5fe711ac553511d0849ce5ef19651d90e9b61fb81a47b6912fa814f1824bba2d"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.124502 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.137814 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.145554 4834 generic.go:334] "Generic (PLEG): container finished" podID="15dc9d10-a46a-4fec-b061-2e72caace933" containerID="bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a" exitCode=0 Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.145721 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15dc9d10-a46a-4fec-b061-2e72caace933","Type":"ContainerDied","Data":"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.145766 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"15dc9d10-a46a-4fec-b061-2e72caace933","Type":"ContainerDied","Data":"fc1492608f385723a53877e72a1382e2f5bc8050dc102c558d3d924800fffc74"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.189417 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs\") pod \"15dc9d10-a46a-4fec-b061-2e72caace933\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.191224 4834 generic.go:334] "Generic (PLEG): container finished" podID="725f14ad-f7a0-4d41-813e-19161c405300" containerID="ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49" exitCode=0 Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.191254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerDied","Data":"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49"} Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.204271 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs\") pod \"15dc9d10-a46a-4fec-b061-2e72caace933\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.204401 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle\") pod \"15dc9d10-a46a-4fec-b061-2e72caace933\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.204470 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdm5x\" (UniqueName: \"kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x\") pod \"15dc9d10-a46a-4fec-b061-2e72caace933\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.204571 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data\") pod \"15dc9d10-a46a-4fec-b061-2e72caace933\" (UID: \"15dc9d10-a46a-4fec-b061-2e72caace933\") " Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.212424 4834 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x" (OuterVolumeSpecName: "kube-api-access-cdm5x") pod "15dc9d10-a46a-4fec-b061-2e72caace933" (UID: "15dc9d10-a46a-4fec-b061-2e72caace933"). InnerVolumeSpecName "kube-api-access-cdm5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.213100 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.217271 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.217300 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mccxd\" (UniqueName: \"kubernetes.io/projected/1e74faea-a792-455c-a253-7012f98c6acf-kube-api-access-mccxd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.217317 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e74faea-a792-455c-a253-7012f98c6acf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.225064 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.234697 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data" (OuterVolumeSpecName: "config-data") pod "15dc9d10-a46a-4fec-b061-2e72caace933" (UID: "15dc9d10-a46a-4fec-b061-2e72caace933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.274830 4834 scope.go:117] "RemoveContainer" containerID="3e35deec45ae37578385bbd09aa3545fddecd1f4ec1244e186609259be21731f" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.298198 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15dc9d10-a46a-4fec-b061-2e72caace933" (UID: "15dc9d10-a46a-4fec-b061-2e72caace933"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.298417 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:42750->10.217.0.168:8776: read: connection reset by peer" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.305277 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data" (OuterVolumeSpecName: "config-data") pod "1e74faea-a792-455c-a253-7012f98c6acf" (UID: "1e74faea-a792-455c-a253-7012f98c6acf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.327751 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.327790 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdm5x\" (UniqueName: \"kubernetes.io/projected/15dc9d10-a46a-4fec-b061-2e72caace933-kube-api-access-cdm5x\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.327803 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.327816 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.327828 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e74faea-a792-455c-a253-7012f98c6acf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.365721 4834 scope.go:117] "RemoveContainer" containerID="939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.384986 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.396507 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.405143 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "15dc9d10-a46a-4fec-b061-2e72caace933" (UID: "15dc9d10-a46a-4fec-b061-2e72caace933"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.406214 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.408308 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.408350 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerName="nova-cell1-conductor-conductor" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.431410 4834 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.441809 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" probeResult="failure" output=< Jan 21 14:56:15 crc kubenswrapper[4834]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 21 14:56:15 crc kubenswrapper[4834]: > Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.502430 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.515240 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "15dc9d10-a46a-4fec-b061-2e72caace933" (UID: "15dc9d10-a46a-4fec-b061-2e72caace933"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.521994 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.527809 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.527939 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerName="nova-scheduler-scheduler" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.534787 4834 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/15dc9d10-a46a-4fec-b061-2e72caace933-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.551234 4834 scope.go:117] "RemoveContainer" containerID="88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.620309 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.627666 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.630975 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.631045 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 
14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.632237 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.646141 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.682335 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.682443 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.733247 4834 scope.go:117] "RemoveContainer" containerID="939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.776396 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99\": container with ID starting with 939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99 not found: ID does not exist" containerID="939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.776443 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99"} err="failed to get container status \"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99\": rpc error: code = NotFound desc = could not find container \"939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99\": container with ID starting with 939bd6e74e83f3325f9685398aa0fa63d0f90ffd42a97382ca1e0c57a918fd99 not found: ID does not exist" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.776476 4834 scope.go:117] "RemoveContainer" containerID="88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.777524 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a\": container with ID starting with 88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a not found: ID does not exist" containerID="88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.777554 4834 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a"} err="failed to get container status \"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a\": rpc error: code = NotFound desc = could not find container \"88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a\": container with ID starting with 88dbe110bbf227bf08d6c0e459b1009c3ec0c639f92aa67737748dc94c09178a not found: ID does not exist" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.777571 4834 scope.go:117] "RemoveContainer" containerID="bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.893741 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894290 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="ovsdbserver-nb" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894306 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="ovsdbserver-nb" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894317 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="probe" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894324 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="probe" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894348 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a45f24-7164-403c-954f-5ff46c148c5a" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894357 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a45f24-7164-403c-954f-5ff46c148c5a" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894366 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894375 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894391 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="init" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894398 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="init" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894414 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="dnsmasq-dns" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894421 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="dnsmasq-dns" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894434 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="cinder-scheduler" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894443 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="cinder-scheduler" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894459 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894465 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894485 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="ovsdbserver-sb" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894491 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="ovsdbserver-sb" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894508 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="mysql-bootstrap" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894515 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="mysql-bootstrap" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894525 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894532 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: E0121 14:56:15.894541 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894548 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894766 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894780 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a45f24-7164-403c-954f-5ff46c148c5a" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894799 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="openstack-network-exporter" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894815 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" containerName="galera" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894830 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="probe" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894838 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="950459bc-faae-4448-8cf7-289275204041" containerName="dnsmasq-dns" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894847 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" 
containerName="ovsdbserver-nb" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894855 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="869db5eb-b0d3-407e-a28b-1d23b27a0299" containerName="ovsdbserver-sb" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894872 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.894883 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e74faea-a792-455c-a253-7012f98c6acf" containerName="cinder-scheduler" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.898338 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.936338 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.968834 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:15 crc kubenswrapper[4834]: I0121 14:56:15.991264 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.059539 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.059693 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2zs\" (UniqueName: \"kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.059757 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.060201 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.072129 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:56216->10.217.0.166:9311: read: connection reset by peer" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.072452 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:56208->10.217.0.166:9311: read: connection reset by peer" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.073248 4834 scope.go:117] "RemoveContainer" containerID="bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a" Jan 21 14:56:16 crc kubenswrapper[4834]: E0121 14:56:16.075350 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a\": container with ID starting with bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a not found: ID does not exist" containerID="bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.075387 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a"} err="failed to get container status \"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a\": rpc error: code = NotFound desc = could not find container \"bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a\": container with ID starting with bb7ad61eaaa8821da0cb9c4d60f12892118fbb7482218ab57f441a19433dcd1a not found: ID does not exist" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.169218 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts\") pod \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.169700 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7clw\" (UniqueName: \"kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw\") pod \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\" (UID: \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.169723 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nld7z\" (UniqueName: \"kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z\") pod \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\" (UID: \"28da9710-d30d-4fe5-ab02-aadd9b32ab1e\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.169864 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts\") pod \"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\" (UID: 
\"9effd323-5ae3-4ed8-a2e0-4659dd231bbd\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.169960 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvjjf\" (UniqueName: \"kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf\") pod \"8f811e2a-291e-401d-9b0f-32146bad80ac\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.170078 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts\") pod \"8f811e2a-291e-401d-9b0f-32146bad80ac\" (UID: \"8f811e2a-291e-401d-9b0f-32146bad80ac\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.170705 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9effd323-5ae3-4ed8-a2e0-4659dd231bbd" (UID: "9effd323-5ae3-4ed8-a2e0-4659dd231bbd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.170826 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w2zs\" (UniqueName: \"kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.171189 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.171360 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.171458 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.171961 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.174832 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.171190 4834 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f811e2a-291e-401d-9b0f-32146bad80ac" (UID: "8f811e2a-291e-401d-9b0f-32146bad80ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.173788 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28da9710-d30d-4fe5-ab02-aadd9b32ab1e" (UID: "28da9710-d30d-4fe5-ab02-aadd9b32ab1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.207305 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw" (OuterVolumeSpecName: "kube-api-access-p7clw") pod "9effd323-5ae3-4ed8-a2e0-4659dd231bbd" (UID: "9effd323-5ae3-4ed8-a2e0-4659dd231bbd"). InnerVolumeSpecName "kube-api-access-p7clw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.207521 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z" (OuterVolumeSpecName: "kube-api-access-nld7z") pod "28da9710-d30d-4fe5-ab02-aadd9b32ab1e" (UID: "28da9710-d30d-4fe5-ab02-aadd9b32ab1e"). InnerVolumeSpecName "kube-api-access-nld7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.207654 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.208398 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w2zs\" (UniqueName: \"kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs\") pod \"redhat-marketplace-ptknb\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.213838 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf" (OuterVolumeSpecName: "kube-api-access-fvjjf") pod "8f811e2a-291e-401d-9b0f-32146bad80ac" (UID: "8f811e2a-291e-401d-9b0f-32146bad80ac"). InnerVolumeSpecName "kube-api-access-fvjjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.235816 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.251905 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.264747 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.273348 4834 generic.go:334] "Generic (PLEG): container finished" podID="725f14ad-f7a0-4d41-813e-19161c405300" containerID="f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.273435 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerDied","Data":"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5"} Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.273473 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56587d777c-2rx88" event={"ID":"725f14ad-f7a0-4d41-813e-19161c405300","Type":"ContainerDied","Data":"96b3563f0bfbe774b9bce480c3e7fd09de71973ab324c248998a4b75431a80ea"} Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.273494 4834 scope.go:117] "RemoveContainer" containerID="f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.274787 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tlgq\" (UniqueName: \"kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq\") pod \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.274887 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.274920 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.274980 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275065 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle\") pod \"61306868-aa06-4574-a568-b36b22fd6db6\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275093 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275130 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd\") pod 
\"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275192 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data\") pod \"61306868-aa06-4574-a568-b36b22fd6db6\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275228 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275275 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.275297 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.295823 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") pod \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.295909 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tc8g\" (UniqueName: \"kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g\") pod \"61306868-aa06-4574-a568-b36b22fd6db6\" (UID: \"61306868-aa06-4574-a568-b36b22fd6db6\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.296868 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f811e2a-291e-401d-9b0f-32146bad80ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.296885 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.296895 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7clw\" (UniqueName: \"kubernetes.io/projected/9effd323-5ae3-4ed8-a2e0-4659dd231bbd-kube-api-access-p7clw\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.296906 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nld7z\" (UniqueName: \"kubernetes.io/projected/28da9710-d30d-4fe5-ab02-aadd9b32ab1e-kube-api-access-nld7z\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.296915 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvjjf\" (UniqueName: 
\"kubernetes.io/projected/8f811e2a-291e-401d-9b0f-32146bad80ac-kube-api-access-fvjjf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.323938 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.374860 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.377964 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq" (OuterVolumeSpecName: "kube-api-access-5tlgq") pod "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5" (UID: "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5"). InnerVolumeSpecName "kube-api-access-5tlgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.386155 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.386632 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5" (UID: "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: W0121 14:56:16.399920 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/725f14ad-f7a0-4d41-813e-19161c405300/volumes/kubernetes.io~projected/etc-swift Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.410463 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.406950 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk" (OuterVolumeSpecName: "kube-api-access-48zhk") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "kube-api-access-48zhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.399844 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.410782 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") pod \"725f14ad-f7a0-4d41-813e-19161c405300\" (UID: \"725f14ad-f7a0-4d41-813e-19161c405300\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.411032 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") pod \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\" (UID: \"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.412102 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.412220 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/725f14ad-f7a0-4d41-813e-19161c405300-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.412288 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tlgq\" (UniqueName: \"kubernetes.io/projected/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-kube-api-access-5tlgq\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.412348 4834 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.418053 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1e5f-account-create-update-sm488" Jan 21 14:56:16 crc kubenswrapper[4834]: E0121 14:56:16.391461 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:16 crc kubenswrapper[4834]: W0121 14:56:16.419158 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/725f14ad-f7a0-4d41-813e-19161c405300/volumes/kubernetes.io~projected/kube-api-access-48zhk Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.419233 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk" (OuterVolumeSpecName: "kube-api-access-48zhk") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "kube-api-access-48zhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: W0121 14:56:16.419306 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5/volumes/kubernetes.io~configmap/operator-scripts Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.419313 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5" (UID: "093c745f-d0bf-4c8e-aceb-c40d42ad2ae5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.420751 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.435880 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g" (OuterVolumeSpecName: "kube-api-access-2tc8g") pod "61306868-aa06-4574-a568-b36b22fd6db6" (UID: "61306868-aa06-4574-a568-b36b22fd6db6"). InnerVolumeSpecName "kube-api-access-2tc8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: E0121 14:56:16.436064 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.466209 4834 generic.go:334] "Generic (PLEG): container finished" podID="61306868-aa06-4574-a568-b36b22fd6db6" containerID="537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.466344 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.475289 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b312a8-0dee-488f-b998-4653b1cce8be" path="/var/lib/kubelet/pods/73b312a8-0dee-488f-b998-4653b1cce8be/volumes" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.476554 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972527b7-5fbf-4cb1-9495-155dd778bba6" path="/var/lib/kubelet/pods/972527b7-5fbf-4cb1-9495-155dd778bba6/volumes" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.489709 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.516030 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48zhk\" (UniqueName: \"kubernetes.io/projected/725f14ad-f7a0-4d41-813e-19161c405300-kube-api-access-48zhk\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.524153 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.524169 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tc8g\" (UniqueName: \"kubernetes.io/projected/61306868-aa06-4574-a568-b36b22fd6db6-kube-api-access-2tc8g\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.521788 4834 generic.go:334] "Generic (PLEG): container finished" podID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerID="19ad5e1d36dc4cca92dc2d55710cdc46c1e446991cfdd3c7f69a92b2d721c0f8" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: E0121 14:56:16.519437 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:16 crc kubenswrapper[4834]: E0121 14:56:16.526588 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.520782 4834 scope.go:117] "RemoveContainer" containerID="ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.571402 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": dial tcp 10.217.0.205:8775: connect: connection refused" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.571392 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": dial tcp 10.217.0.205:8775: connect: connection refused" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.578444 4834 generic.go:334] "Generic (PLEG): container finished" podID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerID="85378de999d4b746f726db38c15173ce2ec1d4f11248e24fa1ad5e385ff441aa" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.584329 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.591682 4834 generic.go:334] "Generic (PLEG): container finished" podID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerID="4cdf01d893881ac1c03282850be08114203e3678d346c96a0db7e061046cf925" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.626605 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k9kv\" (UniqueName: \"kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627072 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627244 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627414 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627514 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627625 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.627811 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.628748 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs" (OuterVolumeSpecName: "logs") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.643502 4834 generic.go:334] "Generic (PLEG): container finished" podID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerID="008e24c6de4a7972abcecfe67f07648df9fa7fff4e253229a970cbbe16f3e832" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.655734 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv" (OuterVolumeSpecName: "kube-api-access-7k9kv") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "kube-api-access-7k9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.667552 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1045-account-create-update-7ssd5" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.682606 4834 generic.go:334] "Generic (PLEG): container finished" podID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerID="f5c669d4fc60c3fe1ef25f2b49cb0ecb5679b1b1b63b75418dcd0684cfa0bca4" exitCode=1 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.710651 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts" (OuterVolumeSpecName: "scripts") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.726110 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61306868-aa06-4574-a568-b36b22fd6db6" (UID: "61306868-aa06-4574-a568-b36b22fd6db6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.733130 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.733187 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k9kv\" (UniqueName: \"kubernetes.io/projected/46ef0752-abe1-465f-8b0b-77906b861c12-kube-api-access-7k9kv\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.733200 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ef0752-abe1-465f-8b0b-77906b861c12-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.733211 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.757240 4834 generic.go:334] "Generic (PLEG): container finished" podID="46ef0752-abe1-465f-8b0b-77906b861c12" containerID="0b54aa54647967f433ead26be60f5ecca8009300967e54ee0684394f6f2cdd0b" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.757543 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69bb684bc8-6s7qv" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.759619 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fxzqd" podStartSLOduration=7.75959243 podStartE2EDuration="7.75959243s" podCreationTimestamp="2026-01-21 14:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:56:16.709223502 +0000 UTC m=+1522.683572547" watchObservedRunningTime="2026-01-21 14:56:16.75959243 +0000 UTC m=+1522.733941495" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.790150 4834 generic.go:334] "Generic (PLEG): container finished" podID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerID="831f7e80b2dd51f5fba704c0975d18b653c249edfc68646cb8a57e93f78ca51e" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.813284 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data" (OuterVolumeSpecName: "config-data") pod "61306868-aa06-4574-a568-b36b22fd6db6" (UID: "61306868-aa06-4574-a568-b36b22fd6db6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.825705 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc05-account-create-update-sxz6w" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.846534 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61306868-aa06-4574-a568-b36b22fd6db6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.871988 4834 generic.go:334] "Generic (PLEG): container finished" podID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerID="2ebbbaf00d9a369dd1ca5f46d2f165bdb5bf4264e991c134e7c3cd8817356d6f" exitCode=0 Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.887027 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9ca3-account-create-update-prg2p" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.892337 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.934358 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:16 crc kubenswrapper[4834]: I0121 14:56:16.975240 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.064865 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.088528 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.097602 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.097655 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.113639 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.114170 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.169444 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data" (OuterVolumeSpecName: "config-data") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.184757 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "725f14ad-f7a0-4d41-813e-19161c405300" (UID: "725f14ad-f7a0-4d41-813e-19161c405300"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.202235 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.202274 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725f14ad-f7a0-4d41-813e-19161c405300-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.211230 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data" (OuterVolumeSpecName: "config-data") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.250139 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.322461 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.329386 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") pod \"46ef0752-abe1-465f-8b0b-77906b861c12\" (UID: \"46ef0752-abe1-465f-8b0b-77906b861c12\") " Jan 21 14:56:18 crc kubenswrapper[4834]: W0121 14:56:17.329509 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/46ef0752-abe1-465f-8b0b-77906b861c12/volumes/kubernetes.io~secret/public-tls-certs Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.329605 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46ef0752-abe1-465f-8b0b-77906b861c12" (UID: "46ef0752-abe1-465f-8b0b-77906b861c12"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.330100 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.330117 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.330126 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ef0752-abe1-465f-8b0b-77906b861c12-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347151 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347206 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1e5f-account-create-update-sm488" event={"ID":"9effd323-5ae3-4ed8-a2e0-4659dd231bbd","Type":"ContainerDied","Data":"169b3fe624ac21635f273cad7f3e8ffc2ca34497db222d84f7c2641428255804"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347230 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347246 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61306868-aa06-4574-a568-b36b22fd6db6","Type":"ContainerDied","Data":"537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347260 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerDied","Data":"19ad5e1d36dc4cca92dc2d55710cdc46c1e446991cfdd3c7f69a92b2d721c0f8"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347276 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerDied","Data":"85378de999d4b746f726db38c15173ce2ec1d4f11248e24fa1ad5e385ff441aa"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347294 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerDied","Data":"4cdf01d893881ac1c03282850be08114203e3678d346c96a0db7e061046cf925"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347311 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1e5f-account-create-update-sm488"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347441 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerDied","Data":"008e24c6de4a7972abcecfe67f07648df9fa7fff4e253229a970cbbe16f3e832"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347477 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1045-account-create-update-7ssd5" 
event={"ID":"28da9710-d30d-4fe5-ab02-aadd9b32ab1e","Type":"ContainerDied","Data":"6077beabe0839a0b10e944998fdc96f749ea64c0c4635de3e7066861e44d308c"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347597 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxzqd" event={"ID":"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f","Type":"ContainerDied","Data":"f5c669d4fc60c3fe1ef25f2b49cb0ecb5679b1b1b63b75418dcd0684cfa0bca4"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347624 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69bb684bc8-6s7qv" event={"ID":"46ef0752-abe1-465f-8b0b-77906b861c12","Type":"ContainerDied","Data":"0b54aa54647967f433ead26be60f5ecca8009300967e54ee0684394f6f2cdd0b"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347638 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerDied","Data":"831f7e80b2dd51f5fba704c0975d18b653c249edfc68646cb8a57e93f78ca51e"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347651 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc05-account-create-update-sxz6w" event={"ID":"093c745f-d0bf-4c8e-aceb-c40d42ad2ae5","Type":"ContainerDied","Data":"82484a5e786c32c515fbbb7fbe3adb4a879e90b3531cc6bd3bb7afff5a7da4bb"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347663 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerDied","Data":"2ebbbaf00d9a369dd1ca5f46d2f165bdb5bf4264e991c134e7c3cd8817356d6f"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.347677 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9ca3-account-create-update-prg2p" event={"ID":"8f811e2a-291e-401d-9b0f-32146bad80ac","Type":"ContainerDied","Data":"c69e537756732cd275d2dad2fc022945cc928c987eb5958b32759ce5d87c47b4"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.348346 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.348432 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" gracePeriod=600 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.367394 4834 scope.go:117] "RemoveContainer" containerID="f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:17.367766 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5\": container with ID starting with f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5 not found: ID does not exist" containerID="f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 
14:56:17.367796 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5"} err="failed to get container status \"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5\": rpc error: code = NotFound desc = could not find container \"f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5\": container with ID starting with f8b0b6830c6aa03e24d16a4946ed892962b6c68f7f5e3956d16e14636199b8f5 not found: ID does not exist" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.367819 4834 scope.go:117] "RemoveContainer" containerID="ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:17.368127 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49\": container with ID starting with ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49 not found: ID does not exist" containerID="ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.368145 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49"} err="failed to get container status \"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49\": rpc error: code = NotFound desc = could not find container \"ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49\": container with ID starting with ba98f8cb2c94bb84242bcc2a0f94f19d0eca48af12bf6a94c8f0c378b5b74a49 not found: ID does not exist" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.368161 4834 scope.go:117] "RemoveContainer" containerID="537c85e6c54e2bb1768b64164ebd5a9c9403e16796d84252d7cce9d6047e6d4f" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.381043 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.413861 4834 scope.go:117] "RemoveContainer" containerID="0b54aa54647967f433ead26be60f5ecca8009300967e54ee0684394f6f2cdd0b" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.418181 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431208 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431286 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431331 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431424 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsnx\" (UniqueName: \"kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431449 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431474 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431651 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.431681 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data\") pod \"a471c86e-9e4a-4aba-848a-75aefa12c239\" (UID: \"a471c86e-9e4a-4aba-848a-75aefa12c239\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.433870 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.433984 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs" (OuterVolumeSpecName: "logs") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.435884 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.436107 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.437869 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.444126 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.457338 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx" (OuterVolumeSpecName: "kube-api-access-ptsnx") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "kube-api-access-ptsnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.467113 4834 scope.go:117] "RemoveContainer" containerID="637192b126c7873039ac5b1bfa43915a4f297b8c2c14190ae08138d21215676f" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.490125 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts" (OuterVolumeSpecName: "scripts") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.499407 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data" (OuterVolumeSpecName: "config-data") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.502334 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.523094 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.523377 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.530042 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533436 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533482 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533517 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533581 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533623 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533692 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533727 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh7nm\" (UniqueName: \"kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533749 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533804 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.533964 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534004 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534047 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534076 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle\") pod \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\" (UID: \"f53e7c29-7c71-4dba-8c9f-2a9accc74294\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534149 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcslj\" (UniqueName: \"kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj\") pod \"10412867-64ac-413b-8f2f-9bdac2bb8759\" (UID: \"10412867-64ac-413b-8f2f-9bdac2bb8759\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534699 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534715 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptsnx\" (UniqueName: \"kubernetes.io/projected/a471c86e-9e4a-4aba-848a-75aefa12c239-kube-api-access-ptsnx\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534740 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534749 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534759 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534768 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a471c86e-9e4a-4aba-848a-75aefa12c239-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.534777 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.535895 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.536005 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs" (OuterVolumeSpecName: "logs") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.545993 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a471c86e-9e4a-4aba-848a-75aefa12c239" (UID: "a471c86e-9e4a-4aba-848a-75aefa12c239"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.546129 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1045-account-create-update-7ssd5"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.546631 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs" (OuterVolumeSpecName: "logs") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.548747 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.553544 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.561729 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm" (OuterVolumeSpecName: "kube-api-access-dh7nm") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "kube-api-access-dh7nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.565116 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.578631 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.579175 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj" (OuterVolumeSpecName: "kube-api-access-bcslj") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "kube-api-access-bcslj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.588039 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bc05-account-create-update-sxz6w"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.588304 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts" (OuterVolumeSpecName: "scripts") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.597432 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.611771 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.613174 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data" (OuterVolumeSpecName: "config-data") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.613259 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.619771 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.636770 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.636857 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz8rp\" (UniqueName: \"kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637118 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637150 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637301 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637366 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637388 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.637427 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs\") pod \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\" (UID: \"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638174 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638192 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f53e7c29-7c71-4dba-8c9f-2a9accc74294-logs\") on node \"crc\" DevicePath \"\"" 
Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638202 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcslj\" (UniqueName: \"kubernetes.io/projected/10412867-64ac-413b-8f2f-9bdac2bb8759-kube-api-access-bcslj\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638213 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638223 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638231 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f53e7c29-7c71-4dba-8c9f-2a9accc74294-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638243 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh7nm\" (UniqueName: \"kubernetes.io/projected/f53e7c29-7c71-4dba-8c9f-2a9accc74294-kube-api-access-dh7nm\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638252 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10412867-64ac-413b-8f2f-9bdac2bb8759-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638263 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a471c86e-9e4a-4aba-848a-75aefa12c239-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638271 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.638280 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.639886 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs" (OuterVolumeSpecName: "logs") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.639941 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.645793 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.645873 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.658513 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp" (OuterVolumeSpecName: "kube-api-access-wz8rp") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "kube-api-access-wz8rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.659549 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts" (OuterVolumeSpecName: "scripts") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.670350 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data" (OuterVolumeSpecName: "config-data") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.694849 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.699618 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.718111 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69bb684bc8-6s7qv"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.732882 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "10412867-64ac-413b-8f2f-9bdac2bb8759" (UID: "10412867-64ac-413b-8f2f-9bdac2bb8759"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.737602 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:17.738427 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739448 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz8rp\" (UniqueName: \"kubernetes.io/projected/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-kube-api-access-wz8rp\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739463 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739473 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739491 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739499 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10412867-64ac-413b-8f2f-9bdac2bb8759-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739507 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739515 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739524 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739541 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.739550 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:17.739909 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:17.740033 4834 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data podName:b87b73b4-2715-4ce7-81b3-df0c1f57922f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:25.740011382 +0000 UTC m=+1531.714360427 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data") pod "rabbitmq-cell1-server-0" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f") : configmap "rabbitmq-cell1-config-data" not found Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.745829 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.759582 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.768212 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.769102 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.770057 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f53e7c29-7c71-4dba-8c9f-2a9accc74294" (UID: "f53e7c29-7c71-4dba-8c9f-2a9accc74294"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.774946 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.779123 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data" (OuterVolumeSpecName: "config-data") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.794368 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9ca3-account-create-update-prg2p"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.807691 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" (UID: "08e8a6dd-5bbd-4f91-9860-2b3146ba47a2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.833895 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.834254 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-central-agent" containerID="cri-o://19c506fe91677f38040b6ac9abfe39213d5919b09eeccd51a69bcdebc4c4dc90" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.834329 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="proxy-httpd" containerID="cri-o://4c978202772d336f5ede7c461a6d979b857f71449d109c11375fecceb91b59eb" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.834375 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="sg-core" containerID="cri-o://cdd9893db2d5fb4b6116c9182d5161ce755afa8f174d7fe35f8d4c29d5c0807e" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.834412 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-notification-agent" containerID="cri-o://107a731e7f2664d7f9aa9d644b794b0125c9dda7913148f699b179b99aae879d" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.842995 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.843039 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.843054 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f53e7c29-7c71-4dba-8c9f-2a9accc74294-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.843066 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.843077 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.858036 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.861955 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c3c79dc8-1d60-46f6-add1-1783486562f2" containerName="kube-state-metrics" containerID="cri-o://741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.888281 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.929523 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.942031 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08e8a6dd-5bbd-4f91-9860-2b3146ba47a2","Type":"ContainerDied","Data":"819bd9c965bf2ba263ce309dcf46a6a548468ca24472b2286bf426740d233239"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.942080 4834 scope.go:117] "RemoveContainer" containerID="4cdf01d893881ac1c03282850be08114203e3678d346c96a0db7e061046cf925" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.942174 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.944610 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data\") pod \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.944650 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs\") pod \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.944765 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs\") pod \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.944877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk6r6\" (UniqueName: \"kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6\") pod \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.944903 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle\") pod \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\" (UID: \"615eb241-8fa5-4c76-b710-19a3bd65e0ac\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.969851 4834 generic.go:334] "Generic (PLEG): container finished" podID="84309501-c399-4d83-9876-00b58ba67b0d" containerID="b0fceacb1c47e8063d1e7c22118f445dc2ab5d3565c665ac3bb3f6be80d3738d" exitCode=0 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.969918 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerDied","Data":"b0fceacb1c47e8063d1e7c22118f445dc2ab5d3565c665ac3bb3f6be80d3738d"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.982374 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs" (OuterVolumeSpecName: "logs") pod 
"615eb241-8fa5-4c76-b710-19a3bd65e0ac" (UID: "615eb241-8fa5-4c76-b710-19a3bd65e0ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.996900 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f53e7c29-7c71-4dba-8c9f-2a9accc74294","Type":"ContainerDied","Data":"1723cbb8157acedc880b5d7714938ee819e163f28121f576b90009526b63bcea"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:17.997291 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.030846 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a471c86e-9e4a-4aba-848a-75aefa12c239","Type":"ContainerDied","Data":"3ce00cf2fad839def106c12c9fffdc46329ab8ef0cb444963c763fe9c17ff6dc"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.031094 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.057120 4834 generic.go:334] "Generic (PLEG): container finished" podID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerID="9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" exitCode=0 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.057284 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"07ff4f13-b754-4f82-accc-54ed420dce2e","Type":"ContainerDied","Data":"9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.090520 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/615eb241-8fa5-4c76-b710-19a3bd65e0ac-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.091405 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"615eb241-8fa5-4c76-b710-19a3bd65e0ac","Type":"ContainerDied","Data":"2b16fa379e52ce431a6c58ef3562518a3af3479928cd05e053243b7049a84491"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.091530 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.119594 4834 generic.go:334] "Generic (PLEG): container finished" podID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerID="5c65389c531f75f514e8fc9b44607b68a6c8d3ed7db20c8c4d79dd6242ee95ab" exitCode=1 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.119714 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxzqd" event={"ID":"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f","Type":"ContainerDied","Data":"5c65389c531f75f514e8fc9b44607b68a6c8d3ed7db20c8c4d79dd6242ee95ab"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.156709 4834 scope.go:117] "RemoveContainer" containerID="b12a6f3114054a9898093e473e21761dd4f9e13b5dde59cc5ef5df0092d0285d" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.157021 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56587d777c-2rx88" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.181411 4834 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-fxzqd" secret="" err="secret \"galera-openstack-dockercfg-qdqqv\" not found" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.181461 4834 scope.go:117] "RemoveContainer" containerID="5c65389c531f75f514e8fc9b44607b68a6c8d3ed7db20c8c4d79dd6242ee95ab" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.184897 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-fxzqd_openstack(2977cc62-5ade-40e1-b2ba-1bfe044d2f0f)\"" pod="openstack/root-account-create-update-fxzqd" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193403 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193484 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193517 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193694 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193728 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193779 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.193839 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs\") pod \"4e4b8c88-31ca-4212-939c-9e163ff6af52\" (UID: \"4e4b8c88-31ca-4212-939c-9e163ff6af52\") " Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.198437 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs" (OuterVolumeSpecName: "logs") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: 
"4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.207105 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-23dc-account-create-update-w6z88"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.217607 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data" (OuterVolumeSpecName: "config-data") pod "615eb241-8fa5-4c76-b710-19a3bd65e0ac" (UID: "615eb241-8fa5-4c76-b710-19a3bd65e0ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.221134 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.221251 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "615eb241-8fa5-4c76-b710-19a3bd65e0ac" (UID: "615eb241-8fa5-4c76-b710-19a3bd65e0ac"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.221398 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "615eb241-8fa5-4c76-b710-19a3bd65e0ac" (UID: "615eb241-8fa5-4c76-b710-19a3bd65e0ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.225699 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt" (OuterVolumeSpecName: "kube-api-access-nfmmt") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "kube-api-access-nfmmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.230063 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6" (OuterVolumeSpecName: "kube-api-access-fk6r6") pod "615eb241-8fa5-4c76-b710-19a3bd65e0ac" (UID: "615eb241-8fa5-4c76-b710-19a3bd65e0ac"). InnerVolumeSpecName "kube-api-access-fk6r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.234532 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" exitCode=0 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.234680 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.249666 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.250794 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.254511 4834 generic.go:334] "Generic (PLEG): container finished" podID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerID="8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" exitCode=0 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.254679 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9503bd6-1084-408a-8e1d-65d66dab4170","Type":"ContainerDied","Data":"8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.260384 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.276178 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data" (OuterVolumeSpecName: "config-data") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.277541 4834 generic.go:334] "Generic (PLEG): container finished" podID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerID="52bc71d8458ec442f5c03fd42e72e4aee5dfcd1add1d1cd4d9a3fbb418f22b46" exitCode=0 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.277781 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerDied","Data":"52bc71d8458ec442f5c03fd42e72e4aee5dfcd1add1d1cd4d9a3fbb418f22b46"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.280184 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" event={"ID":"4e4b8c88-31ca-4212-939c-9e163ff6af52","Type":"ContainerDied","Data":"c5f1cf2cb5e4e774781c7c94bb7c3dd49bca527b68353a9f286936f373836cbe"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.280298 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6f6cfdb85b-jvqw8" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.287757 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.291147 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.291136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10412867-64ac-413b-8f2f-9bdac2bb8759","Type":"ContainerDied","Data":"ecfb1fda24285175bc271aaaee2c4805656dbdbfade0205a6a7fcb99123e2a05"} Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304009 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304034 4834 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304048 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304057 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304066 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b8c88-31ca-4212-939c-9e163ff6af52-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304075 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk6r6\" (UniqueName: 
\"kubernetes.io/projected/615eb241-8fa5-4c76-b710-19a3bd65e0ac-kube-api-access-fk6r6\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304136 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/615eb241-8fa5-4c76-b710-19a3bd65e0ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304147 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304157 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.304168 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/4e4b8c88-31ca-4212-939c-9e163ff6af52-kube-api-access-nfmmt\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.305562 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.305704 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts podName:2977cc62-5ade-40e1-b2ba-1bfe044d2f0f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:18.805672752 +0000 UTC m=+1524.780021977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts") pod "root-account-create-update-fxzqd" (UID: "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f") : configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.310273 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-23dc-account-create-update-w6z88"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.322193 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4e4b8c88-31ca-4212-939c-9e163ff6af52" (UID: "4e4b8c88-31ca-4212-939c-9e163ff6af52"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.347988 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="093c745f-d0bf-4c8e-aceb-c40d42ad2ae5" path="/var/lib/kubelet/pods/093c745f-d0bf-4c8e-aceb-c40d42ad2ae5/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.348404 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15dc9d10-a46a-4fec-b061-2e72caace933" path="/var/lib/kubelet/pods/15dc9d10-a46a-4fec-b061-2e72caace933/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.348947 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e74faea-a792-455c-a253-7012f98c6acf" path="/var/lib/kubelet/pods/1e74faea-a792-455c-a253-7012f98c6acf/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.349748 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28da9710-d30d-4fe5-ab02-aadd9b32ab1e" path="/var/lib/kubelet/pods/28da9710-d30d-4fe5-ab02-aadd9b32ab1e/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.350772 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" path="/var/lib/kubelet/pods/46ef0752-abe1-465f-8b0b-77906b861c12/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.351583 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61306868-aa06-4574-a568-b36b22fd6db6" path="/var/lib/kubelet/pods/61306868-aa06-4574-a568-b36b22fd6db6/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.352118 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f811e2a-291e-401d-9b0f-32146bad80ac" path="/var/lib/kubelet/pods/8f811e2a-291e-401d-9b0f-32146bad80ac/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.352461 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9effd323-5ae3-4ed8-a2e0-4659dd231bbd" path="/var/lib/kubelet/pods/9effd323-5ae3-4ed8-a2e0-4659dd231bbd/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.354891 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f1661a-e972-4a56-bf7c-75e6f605a4c9" path="/var/lib/kubelet/pods/a5f1661a-e972-4a56-bf7c-75e6f605a4c9/volumes" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357276 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357318 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-23dc-account-create-update-2fqjh"] Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357692 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357706 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357722 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-server" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357728 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-server" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357741 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357747 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357763 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357769 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357776 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357783 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357796 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357802 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357811 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357817 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357835 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357841 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-api" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357851 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61306868-aa06-4574-a568-b36b22fd6db6" containerName="nova-cell0-conductor-conductor" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357857 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="61306868-aa06-4574-a568-b36b22fd6db6" containerName="nova-cell0-conductor-conductor" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357865 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357872 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357878 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357886 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357901 4834 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357906 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357920 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357943 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357953 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357959 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357968 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357974 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-api" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357982 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.357990 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.357997 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358005 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358187 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358202 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358217 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358226 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358234 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-api" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358244 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-metadata" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358263 4834 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358273 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" containerName="barbican-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358285 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" containerName="nova-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358293 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" containerName="nova-metadata-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358300 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358312 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="61306868-aa06-4574-a568-b36b22fd6db6" containerName="nova-cell0-conductor-conductor" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358320 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358330 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-httpd" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358341 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" containerName="glance-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358352 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="725f14ad-f7a0-4d41-813e-19161c405300" containerName="proxy-server" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.358365 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" containerName="cinder-api-log" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.360177 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-23dc-account-create-update-2fqjh"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.360291 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.360412 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" containerName="memcached" containerID="cri-o://7d56f8927a0e745e7f1e135ef6228427a467fa5229a8d9c26e035ff6e772686c" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.363535 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gvgk7"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.363649 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2ffx" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.367319 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.374376 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-zbc8d"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.403995 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gvgk7"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.407319 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b8c88-31ca-4212-939c-9e163ff6af52-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.440753 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-zbc8d"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.467104 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.467616 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-577b97bbf9-tdqtw" podUID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" containerName="keystone-api" containerID="cri-o://0d518f3b26bced1d0a24c01a084dcae54af88409651e0fec0bea0e6b5762886a" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.480620 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.496420 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-59fnb"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.510393 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpxg\" (UniqueName: \"kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.511043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.521589 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-create-59fnb"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.616030 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpxg\" (UniqueName: \"kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.616111 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.616268 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.616362 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:19.116310493 +0000 UTC m=+1525.090659538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.616763 4834 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.616800 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data podName:df9714a2-fadf-48a3-8b71-07d7419cc713 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:26.616789687 +0000 UTC m=+1532.591138722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data") pod "rabbitmq-server-0" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713") : configmap "rabbitmq-config-data" not found Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.622099 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-23dc-account-create-update-2fqjh"] Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.626341 4834 projected.go:194] Error preparing data for projected volume kube-api-access-kgpxg for pod openstack/keystone-23dc-account-create-update-2fqjh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.626797 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:19.126771428 +0000 UTC m=+1525.101120473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kgpxg" (UniqueName: "kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.641474 4834 scope.go:117] "RemoveContainer" containerID="831f7e80b2dd51f5fba704c0975d18b653c249edfc68646cb8a57e93f78ca51e" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.736748 4834 scope.go:117] "RemoveContainer" containerID="7939f32b333f3aaa2768f5185ab1a75ab0e7c265ea2a3d0ff3990714123a7f14" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.736920 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fxzqd"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.816017 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.821978 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.822353 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts podName:2977cc62-5ade-40e1-b2ba-1bfe044d2f0f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:19.822319676 +0000 UTC m=+1525.796668721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts") pod "root-account-create-update-fxzqd" (UID: "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f") : configmap "openstack-scripts" not found Jan 21 14:56:18 crc kubenswrapper[4834]: E0121 14:56:18.835313 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kgpxg operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-23dc-account-create-update-2fqjh" podUID="5932a25a-318a-4c1d-9a10-3bd1d11125d7" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.842609 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.854377 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.863646 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.876288 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.888654 4834 scope.go:117] "RemoveContainer" containerID="2ebbbaf00d9a369dd1ca5f46d2f165bdb5bf4264e991c134e7c3cd8817356d6f" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.888860 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.905331 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.920345 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.929207 4834 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.933187 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="galera" containerID="cri-o://c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e" gracePeriod=30 Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.937093 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-56587d777c-2rx88"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.946780 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.959604 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.962364 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.965399 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.976107 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:56:18 crc kubenswrapper[4834]: I0121 14:56:18.991560 4834 scope.go:117] "RemoveContainer" containerID="42b08d33f6e569457d53d7d0fb1dde4b71a0fbe929495ac2886bed4d36c39ea2" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.047472 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom\") pod \"67cf94c8-2d73-4940-a873-775f2cba8ce5\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.047569 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom\") pod \"84309501-c399-4d83-9876-00b58ba67b0d\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.047640 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle\") pod \"07ff4f13-b754-4f82-accc-54ed420dce2e\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.047693 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs\") pod \"67cf94c8-2d73-4940-a873-775f2cba8ce5\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.047722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bzw6\" (UniqueName: \"kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6\") pod \"07ff4f13-b754-4f82-accc-54ed420dce2e\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051104 4834 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wfk5m\" (UniqueName: \"kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m\") pod \"67cf94c8-2d73-4940-a873-775f2cba8ce5\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051161 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs\") pod \"84309501-c399-4d83-9876-00b58ba67b0d\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051218 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle\") pod \"67cf94c8-2d73-4940-a873-775f2cba8ce5\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051291 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data\") pod \"84309501-c399-4d83-9876-00b58ba67b0d\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051356 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpjd8\" (UniqueName: \"kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8\") pod \"84309501-c399-4d83-9876-00b58ba67b0d\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051381 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle\") pod \"84309501-c399-4d83-9876-00b58ba67b0d\" (UID: \"84309501-c399-4d83-9876-00b58ba67b0d\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051435 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data\") pod \"07ff4f13-b754-4f82-accc-54ed420dce2e\" (UID: \"07ff4f13-b754-4f82-accc-54ed420dce2e\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.051474 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data\") pod \"67cf94c8-2d73-4940-a873-775f2cba8ce5\" (UID: \"67cf94c8-2d73-4940-a873-775f2cba8ce5\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.053079 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs" (OuterVolumeSpecName: "logs") pod "67cf94c8-2d73-4940-a873-775f2cba8ce5" (UID: "67cf94c8-2d73-4940-a873-775f2cba8ce5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.053485 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs" (OuterVolumeSpecName: "logs") pod "84309501-c399-4d83-9876-00b58ba67b0d" (UID: "84309501-c399-4d83-9876-00b58ba67b0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.059632 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "67cf94c8-2d73-4940-a873-775f2cba8ce5" (UID: "67cf94c8-2d73-4940-a873-775f2cba8ce5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.060878 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.072304 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8" (OuterVolumeSpecName: "kube-api-access-zpjd8") pod "84309501-c399-4d83-9876-00b58ba67b0d" (UID: "84309501-c399-4d83-9876-00b58ba67b0d"). InnerVolumeSpecName "kube-api-access-zpjd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.075381 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.075450 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67cf94c8-2d73-4940-a873-775f2cba8ce5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.075465 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84309501-c399-4d83-9876-00b58ba67b0d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.075486 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpjd8\" (UniqueName: \"kubernetes.io/projected/84309501-c399-4d83-9876-00b58ba67b0d-kube-api-access-zpjd8\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.078821 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "84309501-c399-4d83-9876-00b58ba67b0d" (UID: "84309501-c399-4d83-9876-00b58ba67b0d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.082950 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6" (OuterVolumeSpecName: "kube-api-access-4bzw6") pod "07ff4f13-b754-4f82-accc-54ed420dce2e" (UID: "07ff4f13-b754-4f82-accc-54ed420dce2e"). InnerVolumeSpecName "kube-api-access-4bzw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.088798 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m" (OuterVolumeSpecName: "kube-api-access-wfk5m") pod "67cf94c8-2d73-4940-a873-775f2cba8ce5" (UID: "67cf94c8-2d73-4940-a873-775f2cba8ce5"). InnerVolumeSpecName "kube-api-access-wfk5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.113384 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6f6cfdb85b-jvqw8"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.121804 4834 scope.go:117] "RemoveContainer" containerID="85378de999d4b746f726db38c15173ce2ec1d4f11248e24fa1ad5e385ff441aa" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.147652 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67cf94c8-2d73-4940-a873-775f2cba8ce5" (UID: "67cf94c8-2d73-4940-a873-775f2cba8ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.177496 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpxg\" (UniqueName: \"kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.177573 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.178035 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07ff4f13-b754-4f82-accc-54ed420dce2e" (UID: "07ff4f13-b754-4f82-accc-54ed420dce2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.182256 4834 scope.go:117] "RemoveContainer" containerID="1c0cc5a6d21ee15bd2b37944b66a859d2a28a82ee9266a7e79976bf6ab2c55a8" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.204067 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.204127 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.204199 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:20.204171044 +0000 UTC m=+1526.178520089 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : configmap "openstack-scripts" not found Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.204580 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bzw6\" (UniqueName: \"kubernetes.io/projected/07ff4f13-b754-4f82-accc-54ed420dce2e-kube-api-access-4bzw6\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.204602 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfk5m\" (UniqueName: \"kubernetes.io/projected/67cf94c8-2d73-4940-a873-775f2cba8ce5-kube-api-access-wfk5m\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.204615 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.210245 4834 projected.go:194] Error preparing data for projected volume kube-api-access-kgpxg for pod openstack/keystone-23dc-account-create-update-2fqjh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.210348 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:20.210321215 +0000 UTC m=+1526.184670260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kgpxg" (UniqueName: "kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.219220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data" (OuterVolumeSpecName: "config-data") pod "07ff4f13-b754-4f82-accc-54ed420dce2e" (UID: "07ff4f13-b754-4f82-accc-54ed420dce2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.222389 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data" (OuterVolumeSpecName: "config-data") pod "67cf94c8-2d73-4940-a873-775f2cba8ce5" (UID: "67cf94c8-2d73-4940-a873-775f2cba8ce5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.222741 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.252264 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.256090 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data" (OuterVolumeSpecName: "config-data") pod "84309501-c399-4d83-9876-00b58ba67b0d" (UID: "84309501-c399-4d83-9876-00b58ba67b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.280852 4834 scope.go:117] "RemoveContainer" containerID="f5c669d4fc60c3fe1ef25f2b49cb0ecb5679b1b1b63b75418dcd0684cfa0bca4" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.309248 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.309296 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.309310 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07ff4f13-b754-4f82-accc-54ed420dce2e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.309323 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67cf94c8-2d73-4940-a873-775f2cba8ce5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.317164 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84309501-c399-4d83-9876-00b58ba67b0d" (UID: "84309501-c399-4d83-9876-00b58ba67b0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.340232 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.354495 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-87996dbdf-vzvsk" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.360566 4834 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.371101 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.371849 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" event={"ID":"67cf94c8-2d73-4940-a873-775f2cba8ce5","Type":"ContainerDied","Data":"6a5d422c1b706b8b694f3504637d43e6f9b38d6d5d9955fb7f5dcba94bd7480e"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.371968 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d5d49578b-z9xbl" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.394050 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9503bd6-1084-408a-8e1d-65d66dab4170","Type":"ContainerDied","Data":"4fcd2fdb75458ce100d6843a7ade251b39879b70b2374f74fb8e4bca6aff8087"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.394172 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.420633 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dgr\" (UniqueName: \"kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr\") pod \"e9503bd6-1084-408a-8e1d-65d66dab4170\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.420748 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle\") pod \"e9503bd6-1084-408a-8e1d-65d66dab4170\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.420981 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data\") pod \"e9503bd6-1084-408a-8e1d-65d66dab4170\" (UID: \"e9503bd6-1084-408a-8e1d-65d66dab4170\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.421460 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84309501-c399-4d83-9876-00b58ba67b0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.439453 4834 scope.go:117] "RemoveContainer" containerID="6dba53c679c40632f6791fadae8f2aac4acf2c5613e03fc58ea292250e912986" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.461608 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr" (OuterVolumeSpecName: "kube-api-access-d8dgr") pod "e9503bd6-1084-408a-8e1d-65d66dab4170" (UID: "e9503bd6-1084-408a-8e1d-65d66dab4170"). InnerVolumeSpecName "kube-api-access-d8dgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.467419 4834 generic.go:334] "Generic (PLEG): container finished" podID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerID="41b8202e62174a8eda17f1a9b9dd2a9295f09268d93892ad31cfad9446e70c71" exitCode=0 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.467498 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerDied","Data":"41b8202e62174a8eda17f1a9b9dd2a9295f09268d93892ad31cfad9446e70c71"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.472067 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9503bd6-1084-408a-8e1d-65d66dab4170" (UID: "e9503bd6-1084-408a-8e1d-65d66dab4170"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.473095 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.479171 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data" (OuterVolumeSpecName: "config-data") pod "e9503bd6-1084-408a-8e1d-65d66dab4170" (UID: "e9503bd6-1084-408a-8e1d-65d66dab4170"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.482012 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.488866 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5d5d49578b-z9xbl"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506020 4834 generic.go:334] "Generic (PLEG): container finished" podID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerID="4c978202772d336f5ede7c461a6d979b857f71449d109c11375fecceb91b59eb" exitCode=0 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506349 4834 generic.go:334] "Generic (PLEG): container finished" podID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerID="cdd9893db2d5fb4b6116c9182d5161ce755afa8f174d7fe35f8d4c29d5c0807e" exitCode=2 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506423 4834 generic.go:334] "Generic (PLEG): container finished" podID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerID="19c506fe91677f38040b6ac9abfe39213d5919b09eeccd51a69bcdebc4c4dc90" exitCode=0 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506600 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerDied","Data":"4c978202772d336f5ede7c461a6d979b857f71449d109c11375fecceb91b59eb"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506708 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerDied","Data":"cdd9893db2d5fb4b6116c9182d5161ce755afa8f174d7fe35f8d4c29d5c0807e"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.506802 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerDied","Data":"19c506fe91677f38040b6ac9abfe39213d5919b09eeccd51a69bcdebc4c4dc90"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.521397 4834 generic.go:334] "Generic (PLEG): container finished" podID="234831ee-247b-40ae-9c71-db9d7b45d275" containerID="7d56f8927a0e745e7f1e135ef6228427a467fa5229a8d9c26e035ff6e772686c" exitCode=0 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.522517 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.522579 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"234831ee-247b-40ae-9c71-db9d7b45d275","Type":"ContainerDied","Data":"7d56f8927a0e745e7f1e135ef6228427a467fa5229a8d9c26e035ff6e772686c"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.522720 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.522854 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523014 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmhw\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523117 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523213 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523304 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523427 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " 
Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523520 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523629 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.523789 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info\") pod \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\" (UID: \"b87b73b4-2715-4ce7-81b3-df0c1f57922f\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.524497 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.524585 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dgr\" (UniqueName: \"kubernetes.io/projected/e9503bd6-1084-408a-8e1d-65d66dab4170-kube-api-access-d8dgr\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.524664 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9503bd6-1084-408a-8e1d-65d66dab4170-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.527995 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.532065 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.535382 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.536007 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw" (OuterVolumeSpecName: "kube-api-access-pdmhw") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). 
InnerVolumeSpecName "kube-api-access-pdmhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.536050 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.536186 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.536718 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" event={"ID":"84309501-c399-4d83-9876-00b58ba67b0d","Type":"ContainerDied","Data":"8efef12bdabeffe63c9009bb5dce3f5a5ad7667c43a921d22db342d8beb3c8f7"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.537350 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5696b4bbb9-8l4r8" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.546596 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info" (OuterVolumeSpecName: "pod-info") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.547209 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"07ff4f13-b754-4f82-accc-54ed420dce2e","Type":"ContainerDied","Data":"8d8a63311e10d405f56c84fccafd5297ec905bcba33f374cf845ccc3f75e5d45"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.547306 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.559855 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.560731 4834 generic.go:334] "Generic (PLEG): container finished" podID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerID="74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf" exitCode=0 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.560876 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.561103 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerDied","Data":"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.561175 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b87b73b4-2715-4ce7-81b3-df0c1f57922f","Type":"ContainerDied","Data":"61069d554ec42a69dcd479133fe906a0e908eeae4c7bb9655bfadcc270ae69ac"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.565258 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerStarted","Data":"21c6cdd6527c5de19003e3fd0a135df6eaaa9e07c4191293c107b7ae1eb02afe"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.568026 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data" (OuterVolumeSpecName: "config-data") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.582242 4834 generic.go:334] "Generic (PLEG): container finished" podID="c3c79dc8-1d60-46f6-add1-1783486562f2" containerID="741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683" exitCode=2 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.582366 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.582394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c3c79dc8-1d60-46f6-add1-1783486562f2","Type":"ContainerDied","Data":"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683"} Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.587813 4834 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-fxzqd" secret="" err="secret \"galera-openstack-dockercfg-qdqqv\" not found" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.587865 4834 scope.go:117] "RemoveContainer" containerID="5c65389c531f75f514e8fc9b44607b68a6c8d3ed7db20c8c4d79dd6242ee95ab" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.588097 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-fxzqd_openstack(2977cc62-5ade-40e1-b2ba-1bfe044d2f0f)\"" pod="openstack/root-account-create-update-fxzqd" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.588627 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.618047 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf" (OuterVolumeSpecName: "server-conf") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.626182 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config\") pod \"c3c79dc8-1d60-46f6-add1-1783486562f2\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.626370 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle\") pod \"c3c79dc8-1d60-46f6-add1-1783486562f2\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.626477 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs\") pod \"c3c79dc8-1d60-46f6-add1-1783486562f2\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.626509 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2kf\" (UniqueName: \"kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf\") pod \"c3c79dc8-1d60-46f6-add1-1783486562f2\" (UID: \"c3c79dc8-1d60-46f6-add1-1783486562f2\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627179 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627198 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627207 4834 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627235 4834 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627244 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmhw\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-kube-api-access-pdmhw\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627252 4834 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b87b73b4-2715-4ce7-81b3-df0c1f57922f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627260 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b87b73b4-2715-4ce7-81b3-df0c1f57922f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627268 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627321 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.627332 4834 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b87b73b4-2715-4ce7-81b3-df0c1f57922f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.631551 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf" (OuterVolumeSpecName: "kube-api-access-hx2kf") pod "c3c79dc8-1d60-46f6-add1-1783486562f2" (UID: "c3c79dc8-1d60-46f6-add1-1783486562f2"). InnerVolumeSpecName "kube-api-access-hx2kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.661690 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.668567 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ffx"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.674307 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3c79dc8-1d60-46f6-add1-1783486562f2" (UID: "c3c79dc8-1d60-46f6-add1-1783486562f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.688092 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "c3c79dc8-1d60-46f6-add1-1783486562f2" (UID: "c3c79dc8-1d60-46f6-add1-1783486562f2"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.694305 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b87b73b4-2715-4ce7-81b3-df0c1f57922f" (UID: "b87b73b4-2715-4ce7-81b3-df0c1f57922f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.706558 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "c3c79dc8-1d60-46f6-add1-1783486562f2" (UID: "c3c79dc8-1d60-46f6-add1-1783486562f2"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729000 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729031 4834 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729042 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2kf\" (UniqueName: \"kubernetes.io/projected/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-api-access-hx2kf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729052 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729061 4834 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c3c79dc8-1d60-46f6-add1-1783486562f2-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.729070 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b87b73b4-2715-4ce7-81b3-df0c1f57922f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.832381 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.832499 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts podName:2977cc62-5ade-40e1-b2ba-1bfe044d2f0f nodeName:}" failed. No retries permitted until 2026-01-21 14:56:21.832466313 +0000 UTC m=+1527.806815538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts") pod "root-account-create-update-fxzqd" (UID: "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f") : configmap "openstack-scripts" not found Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.835842 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.886325 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.901969 4834 scope.go:117] "RemoveContainer" containerID="19ad5e1d36dc4cca92dc2d55710cdc46c1e446991cfdd3c7f69a92b2d721c0f8" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936105 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936176 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936282 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscbv\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936373 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936420 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936474 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936505 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936528 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936578 4834 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.936639 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls\") pod \"df9714a2-fadf-48a3-8b71-07d7419cc713\" (UID: \"df9714a2-fadf-48a3-8b71-07d7419cc713\") " Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.938505 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.939614 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.941208 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.943411 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.956878 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv" (OuterVolumeSpecName: "kube-api-access-kscbv") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "kube-api-access-kscbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.958064 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.963362 4834 scope.go:117] "RemoveContainer" containerID="d0842ea9c1eb4c9eb1b4ebac20aeac250fca82a332954d0e335a267a9f99c8f0" Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.963618 4834 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 21 14:56:19 crc kubenswrapper[4834]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T14:56:12Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 14:56:19 crc kubenswrapper[4834]: /etc/init.d/functions: line 589: 494 Alarm clock "$@" Jan 21 14:56:19 crc kubenswrapper[4834]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-9wtcs" message=< Jan 21 14:56:19 crc kubenswrapper[4834]: Exiting ovn-controller (1) [FAILED] Jan 21 14:56:19 crc kubenswrapper[4834]: Killing ovn-controller (1) [ OK ] Jan 21 14:56:19 crc kubenswrapper[4834]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 21 14:56:19 crc kubenswrapper[4834]: 2026-01-21T14:56:12Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 14:56:19 crc kubenswrapper[4834]: /etc/init.d/functions: line 589: 494 Alarm clock "$@" Jan 21 14:56:19 crc kubenswrapper[4834]: > Jan 21 14:56:19 crc kubenswrapper[4834]: E0121 14:56:19.963659 4834 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 21 14:56:19 crc kubenswrapper[4834]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-21T14:56:12Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 21 14:56:19 crc kubenswrapper[4834]: /etc/init.d/functions: line 589: 494 Alarm clock "$@" Jan 21 14:56:19 crc kubenswrapper[4834]: > pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" containerID="cri-o://5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.963712 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" containerID="cri-o://5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" gracePeriod=22 Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.964022 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.964329 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.966756 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 14:56:19 crc kubenswrapper[4834]: I0121 14:56:19.966842 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info" (OuterVolumeSpecName: "pod-info") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.006171 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.013146 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.017589 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data" (OuterVolumeSpecName: "config-data") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040374 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040426 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscbv\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-kube-api-access-kscbv\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040439 4834 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9714a2-fadf-48a3-8b71-07d7419cc713-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040470 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040487 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040500 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040510 4834 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9714a2-fadf-48a3-8b71-07d7419cc713-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040520 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.040530 4834 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.045363 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5696b4bbb9-8l4r8"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.062481 4834 scope.go:117] "RemoveContainer" containerID="008e24c6de4a7972abcecfe67f07648df9fa7fff4e253229a970cbbe16f3e832" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.078392 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.091138 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf" (OuterVolumeSpecName: "server-conf") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.111035 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.128774 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.137097 4834 scope.go:117] "RemoveContainer" containerID="dfd1d1f4b01cbe2d02e24b2d79800e43e22f1f1484f9243a7619cca4445cac51" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.137289 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.141851 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs\") pod \"234831ee-247b-40ae-9c71-db9d7b45d275\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.141948 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9cnj\" (UniqueName: \"kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj\") pod \"234831ee-247b-40ae-9c71-db9d7b45d275\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.142033 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data\") pod \"234831ee-247b-40ae-9c71-db9d7b45d275\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.142177 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle\") pod \"234831ee-247b-40ae-9c71-db9d7b45d275\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.142262 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config\") pod \"234831ee-247b-40ae-9c71-db9d7b45d275\" (UID: \"234831ee-247b-40ae-9c71-db9d7b45d275\") " Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.142679 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.142695 4834 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9714a2-fadf-48a3-8b71-07d7419cc713-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.143828 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "234831ee-247b-40ae-9c71-db9d7b45d275" (UID: "234831ee-247b-40ae-9c71-db9d7b45d275"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.144497 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data" (OuterVolumeSpecName: "config-data") pod "234831ee-247b-40ae-9c71-db9d7b45d275" (UID: "234831ee-247b-40ae-9c71-db9d7b45d275"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.151407 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj" (OuterVolumeSpecName: "kube-api-access-w9cnj") pod "234831ee-247b-40ae-9c71-db9d7b45d275" (UID: "234831ee-247b-40ae-9c71-db9d7b45d275"). InnerVolumeSpecName "kube-api-access-w9cnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.151835 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "df9714a2-fadf-48a3-8b71-07d7419cc713" (UID: "df9714a2-fadf-48a3-8b71-07d7419cc713"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.161026 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.173966 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.217558 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.233269 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "234831ee-247b-40ae-9c71-db9d7b45d275" (UID: "234831ee-247b-40ae-9c71-db9d7b45d275"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.239394 4834 scope.go:117] "RemoveContainer" containerID="52bc71d8458ec442f5c03fd42e72e4aee5dfcd1add1d1cd4d9a3fbb418f22b46" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.244964 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e is running failed: container process not found" containerID="5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.245792 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e is running failed: container process not found" containerID="5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.246883 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e is running failed: container process not found" containerID="5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.247006 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-9wtcs" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.249704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpxg\" (UniqueName: \"kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.249824 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts\") pod \"keystone-23dc-account-create-update-2fqjh\" (UID: \"5932a25a-318a-4c1d-9a10-3bd1d11125d7\") " pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.250227 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9714a2-fadf-48a3-8b71-07d7419cc713-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.250249 4834 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.250269 
4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9cnj\" (UniqueName: \"kubernetes.io/projected/234831ee-247b-40ae-9c71-db9d7b45d275-kube-api-access-w9cnj\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.250281 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234831ee-247b-40ae-9c71-db9d7b45d275-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.250292 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.250399 4834 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.250497 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:22.250475107 +0000 UTC m=+1528.224824142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : configmap "openstack-scripts" not found Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.255402 4834 projected.go:194] Error preparing data for projected volume kube-api-access-kgpxg for pod openstack/keystone-23dc-account-create-update-2fqjh: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.255512 4834 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg podName:5932a25a-318a-4c1d-9a10-3bd1d11125d7 nodeName:}" failed. No retries permitted until 2026-01-21 14:56:22.255485883 +0000 UTC m=+1528.229834928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kgpxg" (UniqueName: "kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg") pod "keystone-23dc-account-create-update-2fqjh" (UID: "5932a25a-318a-4c1d-9a10-3bd1d11125d7") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.320314 4834 scope.go:117] "RemoveContainer" containerID="e22110aadbd884384da0840ce5e19cbdcf2bf4d1b9fffe3c6126085fdedeb6df" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.330046 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "234831ee-247b-40ae-9c71-db9d7b45d275" (UID: "234831ee-247b-40ae-9c71-db9d7b45d275"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.345726 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5" path="/var/lib/kubelet/pods/0281f2f1-e9cd-408f-93d3-d63d5e9fc9d5/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.346672 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" path="/var/lib/kubelet/pods/07ff4f13-b754-4f82-accc-54ed420dce2e/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.347563 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e8a6dd-5bbd-4f91-9860-2b3146ba47a2" path="/var/lib/kubelet/pods/08e8a6dd-5bbd-4f91-9860-2b3146ba47a2/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.349599 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10412867-64ac-413b-8f2f-9bdac2bb8759" path="/var/lib/kubelet/pods/10412867-64ac-413b-8f2f-9bdac2bb8759/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.350862 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367e30a8-4fb2-47e5-a2f4-5e481d37fcca" path="/var/lib/kubelet/pods/367e30a8-4fb2-47e5-a2f4-5e481d37fcca/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.351826 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4b8c88-31ca-4212-939c-9e163ff6af52" path="/var/lib/kubelet/pods/4e4b8c88-31ca-4212-939c-9e163ff6af52/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.352228 4834 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/234831ee-247b-40ae-9c71-db9d7b45d275-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.353818 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615eb241-8fa5-4c76-b710-19a3bd65e0ac" path="/var/lib/kubelet/pods/615eb241-8fa5-4c76-b710-19a3bd65e0ac/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.354813 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" path="/var/lib/kubelet/pods/67cf94c8-2d73-4940-a873-775f2cba8ce5/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.355462 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725f14ad-f7a0-4d41-813e-19161c405300" path="/var/lib/kubelet/pods/725f14ad-f7a0-4d41-813e-19161c405300/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.357443 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84309501-c399-4d83-9876-00b58ba67b0d" path="/var/lib/kubelet/pods/84309501-c399-4d83-9876-00b58ba67b0d/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.358348 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a471c86e-9e4a-4aba-848a-75aefa12c239" path="/var/lib/kubelet/pods/a471c86e-9e4a-4aba-848a-75aefa12c239/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.363127 4834 scope.go:117] "RemoveContainer" containerID="8d770e00807ea81d0baadc2453ac237e13a01ff3a0bba99f60d58f1c729e3861" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.363732 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" path="/var/lib/kubelet/pods/b87b73b4-2715-4ce7-81b3-df0c1f57922f/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 
14:56:20.372284 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1c174a-a1ea-4c84-a0e3-5241055f2c28" path="/var/lib/kubelet/pods/bb1c174a-a1ea-4c84-a0e3-5241055f2c28/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.374193 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c79dc8-1d60-46f6-add1-1783486562f2" path="/var/lib/kubelet/pods/c3c79dc8-1d60-46f6-add1-1783486562f2/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.375406 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" path="/var/lib/kubelet/pods/e9503bd6-1084-408a-8e1d-65d66dab4170/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.376754 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53e7c29-7c71-4dba-8c9f-2a9accc74294" path="/var/lib/kubelet/pods/f53e7c29-7c71-4dba-8c9f-2a9accc74294/volumes" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.411099 4834 scope.go:117] "RemoveContainer" containerID="b0fceacb1c47e8063d1e7c22118f445dc2ab5d3565c665ac3bb3f6be80d3738d" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.458279 4834 scope.go:117] "RemoveContainer" containerID="9bf4c21b00ddd9eebc7d0010a2d80343120a83ac8f09fe51905c76215b321ac0" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.508249 4834 scope.go:117] "RemoveContainer" containerID="9cc4e607542ea15206bb43978347800db816a791a94de96842f843bffbc0cd73" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.600345 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.600753 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.602096 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.602138 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.610486 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.613377 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.615860 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.616046 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.625150 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df9714a2-fadf-48a3-8b71-07d7419cc713","Type":"ContainerDied","Data":"b7dcbb4926c68d815faaad4fb3c4b21e147ae2566d0155d55e83e153105eca2c"} Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.625307 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.672319 4834 generic.go:334] "Generic (PLEG): container finished" podID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerID="107a731e7f2664d7f9aa9d644b794b0125c9dda7913148f699b179b99aae879d" exitCode=0 Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.672419 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerDied","Data":"107a731e7f2664d7f9aa9d644b794b0125c9dda7913148f699b179b99aae879d"} Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.675663 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.685630 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.710783 4834 generic.go:334] "Generic (PLEG): container finished" podID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerID="10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d" exitCode=0 Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.710874 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerDied","Data":"10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d"} Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.725628 4834 scope.go:117] "RemoveContainer" containerID="74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.739033 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9wtcs_efc9a766-6bb5-4585-881f-019c2f33f096/ovn-controller/0.log" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.739109 4834 generic.go:334] "Generic (PLEG): container finished" podID="efc9a766-6bb5-4585-881f-019c2f33f096" containerID="5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e" exitCode=137 Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.739176 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs" event={"ID":"efc9a766-6bb5-4585-881f-019c2f33f096","Type":"ContainerDied","Data":"5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e"} Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.800588 4834 scope.go:117] "RemoveContainer" containerID="a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.809020 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"234831ee-247b-40ae-9c71-db9d7b45d275","Type":"ContainerDied","Data":"5e1ea54b693a0b10291bc0fa799ff6cbc4ae898f048ce874887e0d166af4ae30"} Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.809149 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.809218 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-23dc-account-create-update-2fqjh" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.836007 4834 scope.go:117] "RemoveContainer" containerID="74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.836720 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf\": container with ID starting with 74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf not found: ID does not exist" containerID="74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.836780 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf"} err="failed to get container status \"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf\": rpc error: code = NotFound desc = could not find container \"74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf\": container with ID starting with 74bf862edbf0448d042cec4905e0d870bb69dc29443adc15acfc09b231315bdf not found: ID does not exist" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.836806 4834 scope.go:117] "RemoveContainer" containerID="a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c" Jan 21 14:56:20 crc kubenswrapper[4834]: E0121 14:56:20.837362 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c\": container with ID starting with a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c not found: ID does not exist" containerID="a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.837397 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c"} err="failed to get container status \"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c\": rpc error: code = NotFound desc = could not find container \"a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c\": container with ID starting with a5ab0017b75af7884d2ac2a0796de292ab47779855460a82e360bf37db5ed11c not found: ID does not exist" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.837425 4834 scope.go:117] "RemoveContainer" containerID="741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.876319 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.886710 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.912451 4834 scope.go:117] "RemoveContainer" containerID="41b8202e62174a8eda17f1a9b9dd2a9295f09268d93892ad31cfad9446e70c71" Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.916749 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-23dc-account-create-update-2fqjh"] Jan 21 14:56:20 crc kubenswrapper[4834]: I0121 14:56:20.926061 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-23dc-account-create-update-2fqjh"] Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.065122 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgpxg\" (UniqueName: \"kubernetes.io/projected/5932a25a-318a-4c1d-9a10-3bd1d11125d7-kube-api-access-kgpxg\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.065164 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5932a25a-318a-4c1d-9a10-3bd1d11125d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.173038 4834 scope.go:117] "RemoveContainer" containerID="8018ed8fca11c93bbc50ad4d89fee33fca796f9f20da5b7258ee155a5c1edde0" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.278253 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9wtcs_efc9a766-6bb5-4585-881f-019c2f33f096/ovn-controller/0.log" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.278393 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.289677 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.305953 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxzqd" Jan 21 14:56:21 crc kubenswrapper[4834]: E0121 14:56:21.358655 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:21 crc kubenswrapper[4834]: E0121 14:56:21.364807 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:21 crc kubenswrapper[4834]: E0121 14:56:21.367796 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 21 14:56:21 crc kubenswrapper[4834]: E0121 14:56:21.367839 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.373284 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.373361 4834 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.373397 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.375465 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run" (OuterVolumeSpecName: "var-run") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376566 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376704 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht6kr\" (UniqueName: \"kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr\") pod \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376756 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376783 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376805 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376843 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") " Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.376887 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6p87\" (UniqueName: \"kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") " Jan 21 14:56:21 crc 
kubenswrapper[4834]: I0121 14:56:21.376957 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377005 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377010 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377050 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377081 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377135 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vj46\" (UniqueName: \"kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46\") pod \"efc9a766-6bb5-4585-881f-019c2f33f096\" (UID: \"efc9a766-6bb5-4585-881f-019c2f33f096\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377194 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts\") pod \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\" (UID: \"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.377228 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd\") pod \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\" (UID: \"65eff96a-de09-4e96-9fe2-21b1eaedaacc\") "
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.378368 4834 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.378388 4834 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.379549 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts" (OuterVolumeSpecName: "scripts") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.380991 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.380890 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts" (OuterVolumeSpecName: "scripts") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.382009 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.382938 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.383007 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" (UID: "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.385774 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87" (OuterVolumeSpecName: "kube-api-access-z6p87") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "kube-api-access-z6p87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.388530 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr" (OuterVolumeSpecName: "kube-api-access-ht6kr") pod "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" (UID: "2977cc62-5ade-40e1-b2ba-1bfe044d2f0f"). InnerVolumeSpecName "kube-api-access-ht6kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.391121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46" (OuterVolumeSpecName: "kube-api-access-6vj46") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "kube-api-access-6vj46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.394247 4834 scope.go:117] "RemoveContainer" containerID="741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683"
Jan 21 14:56:21 crc kubenswrapper[4834]: E0121 14:56:21.407320 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683\": container with ID starting with 741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683 not found: ID does not exist" containerID="741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.407411 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683"} err="failed to get container status \"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683\": rpc error: code = NotFound desc = could not find container \"741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683\": container with ID starting with 741e3e6b167888c588bc194f89bbefd2a67d7f41046da4b02ac637c31ce8e683 not found: ID does not exist"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.407464 4834 scope.go:117] "RemoveContainer" containerID="7d56f8927a0e745e7f1e135ef6228427a467fa5229a8d9c26e035ff6e772686c"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.429329 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.447134 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.460297 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.478019 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "efc9a766-6bb5-4585-881f-019c2f33f096" (UID: "efc9a766-6bb5-4585-881f-019c2f33f096"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481127 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vj46\" (UniqueName: \"kubernetes.io/projected/efc9a766-6bb5-4585-881f-019c2f33f096-kube-api-access-6vj46\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481157 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481170 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481182 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481194 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481207 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht6kr\" (UniqueName: \"kubernetes.io/projected/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f-kube-api-access-ht6kr\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481220 4834 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481235 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efc9a766-6bb5-4585-881f-019c2f33f096-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481248 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6p87\" (UniqueName: \"kubernetes.io/projected/65eff96a-de09-4e96-9fe2-21b1eaedaacc-kube-api-access-z6p87\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481259 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481272 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65eff96a-de09-4e96-9fe2-21b1eaedaacc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481283 4834 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efc9a766-6bb5-4585-881f-019c2f33f096-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.481295 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc9a766-6bb5-4585-881f-019c2f33f096-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.535035 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data" (OuterVolumeSpecName: "config-data") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.535445 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65eff96a-de09-4e96-9fe2-21b1eaedaacc" (UID: "65eff96a-de09-4e96-9fe2-21b1eaedaacc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.582833 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.582869 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eff96a-de09-4e96-9fe2-21b1eaedaacc-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.831363 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65eff96a-de09-4e96-9fe2-21b1eaedaacc","Type":"ContainerDied","Data":"a0a613f62eca588010824ffbbb3db3da69b63515892c046e5350635a1e2a87e7"}
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.831436 4834 scope.go:117] "RemoveContainer" containerID="4c978202772d336f5ede7c461a6d979b857f71449d109c11375fecceb91b59eb"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.831581 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.856336 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9wtcs_efc9a766-6bb5-4585-881f-019c2f33f096/ovn-controller/0.log"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.856501 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wtcs"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.856582 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wtcs" event={"ID":"efc9a766-6bb5-4585-881f-019c2f33f096","Type":"ContainerDied","Data":"af63ec048c9e60d0adfb5efdeb1a771503d3b79608bba7b4a2e324dcca3954fe"}
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.865885 4834 generic.go:334] "Generic (PLEG): container finished" podID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" containerID="0d518f3b26bced1d0a24c01a084dcae54af88409651e0fec0bea0e6b5762886a" exitCode=0
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.865996 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-577b97bbf9-tdqtw" event={"ID":"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9","Type":"ContainerDied","Data":"0d518f3b26bced1d0a24c01a084dcae54af88409651e0fec0bea0e6b5762886a"}
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.886510 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fxzqd"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.889430 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.889480 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fxzqd" event={"ID":"2977cc62-5ade-40e1-b2ba-1bfe044d2f0f","Type":"ContainerDied","Data":"5fe711ac553511d0849ce5ef19651d90e9b61fb81a47b6912fa814f1824bba2d"}
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.906240 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.908005 4834 scope.go:117] "RemoveContainer" containerID="cdd9893db2d5fb4b6116c9182d5161ce755afa8f174d7fe35f8d4c29d5c0807e"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.980958 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_afa0d119-4c43-4161-8e43-94de0b186cb8/ovn-northd/0.log"
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.981502 4834 generic.go:334] "Generic (PLEG): container finished" podID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464" exitCode=139
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.981814 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2ffx" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="registry-server" containerID="cri-o://f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5" gracePeriod=2
Jan 21 14:56:21 crc kubenswrapper[4834]: I0121 14:56:21.982072 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerDied","Data":"65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464"}
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.003379 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9wtcs"]
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.033155 4834 scope.go:117] "RemoveContainer" containerID="107a731e7f2664d7f9aa9d644b794b0125c9dda7913148f699b179b99aae879d"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.044798 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9wtcs"]
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.056889 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fxzqd"]
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.080572 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fxzqd"]
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.142974 4834 scope.go:117] "RemoveContainer" containerID="19c506fe91677f38040b6ac9abfe39213d5919b09eeccd51a69bcdebc4c4dc90"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.337151 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-577b97bbf9-tdqtw"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.366201 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" path="/var/lib/kubelet/pods/234831ee-247b-40ae-9c71-db9d7b45d275/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.366839 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" path="/var/lib/kubelet/pods/2977cc62-5ade-40e1-b2ba-1bfe044d2f0f/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.367423 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5932a25a-318a-4c1d-9a10-3bd1d11125d7" path="/var/lib/kubelet/pods/5932a25a-318a-4c1d-9a10-3bd1d11125d7/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.367815 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" path="/var/lib/kubelet/pods/65eff96a-de09-4e96-9fe2-21b1eaedaacc/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.376244 4834 scope.go:117] "RemoveContainer" containerID="5ba0edba216191b9000792bf3a2737823531f96f6671827e963296102ce0654e"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.378174 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" path="/var/lib/kubelet/pods/df9714a2-fadf-48a3-8b71-07d7419cc713/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.380392 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" path="/var/lib/kubelet/pods/efc9a766-6bb5-4585-881f-019c2f33f096/volumes"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.411541 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.411635 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.411768 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.411793 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.411819 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.412203 4834 scope.go:117] "RemoveContainer" containerID="5c65389c531f75f514e8fc9b44607b68a6c8d3ed7db20c8c4d79dd6242ee95ab"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.412684 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.412720 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n64sg\" (UniqueName: \"kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.412739 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs\") pod \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\" (UID: \"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.417711 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg" (OuterVolumeSpecName: "kube-api-access-n64sg") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "kube-api-access-n64sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.418829 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.419396 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.438262 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts" (OuterVolumeSpecName: "scripts") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.464430 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data" (OuterVolumeSpecName: "config-data") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.464680 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.481908 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_afa0d119-4c43-4161-8e43-94de0b186cb8/ovn-northd/0.log"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.482133 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.495245 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514264 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514303 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514313 4834 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514326 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514338 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n64sg\" (UniqueName: \"kubernetes.io/projected/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-kube-api-access-n64sg\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514349 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.514357 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.515070 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" (UID: "2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615336 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsp9\" (UniqueName: \"kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615402 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615479 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615565 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615595 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615686 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.615731 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts\") pod \"afa0d119-4c43-4161-8e43-94de0b186cb8\" (UID: \"afa0d119-4c43-4161-8e43-94de0b186cb8\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.616119 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.617204 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.618091 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts" (OuterVolumeSpecName: "scripts") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.621324 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config" (OuterVolumeSpecName: "config") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.644688 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ffx"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.651266 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9" (OuterVolumeSpecName: "kube-api-access-4wsp9") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "kube-api-access-4wsp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.707449 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.719564 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities\") pod \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.719684 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmrqw\" (UniqueName: \"kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw\") pod \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.719741 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content\") pod \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\" (UID: \"3bd802b1-4bcc-4604-a82e-5e84a0f0338e\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720189 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720204 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720214 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsp9\" (UniqueName: \"kubernetes.io/projected/afa0d119-4c43-4161-8e43-94de0b186cb8-kube-api-access-4wsp9\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720226 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720234 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa0d119-4c43-4161-8e43-94de0b186cb8-config\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.720475 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities" (OuterVolumeSpecName: "utilities") pod "3bd802b1-4bcc-4604-a82e-5e84a0f0338e" (UID: "3bd802b1-4bcc-4604-a82e-5e84a0f0338e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.723672 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.725804 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw" (OuterVolumeSpecName: "kube-api-access-lmrqw") pod "3bd802b1-4bcc-4604-a82e-5e84a0f0338e" (UID: "3bd802b1-4bcc-4604-a82e-5e84a0f0338e"). InnerVolumeSpecName "kube-api-access-lmrqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.740764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "afa0d119-4c43-4161-8e43-94de0b186cb8" (UID: "afa0d119-4c43-4161-8e43-94de0b186cb8"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.760345 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.798818 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd802b1-4bcc-4604-a82e-5e84a0f0338e" (UID: "3bd802b1-4bcc-4604-a82e-5e84a0f0338e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821677 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821735 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821777 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821806 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpj8l\" (UniqueName: \"kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821907 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821948 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.821974 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default\") pod \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\" (UID: \"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee\") "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822273 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822295 4834 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822306 4834 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/afa0d119-4c43-4161-8e43-94de0b186cb8-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822320 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmrqw\" (UniqueName: \"kubernetes.io/projected/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-kube-api-access-lmrqw\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822331 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd802b1-4bcc-4604-a82e-5e84a0f0338e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.822879 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.823023 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.824354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.824558 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.826282 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l" (OuterVolumeSpecName: "kube-api-access-qpj8l") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "kube-api-access-qpj8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.832890 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.855092 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.868874 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" (UID: "d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924328 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924381 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924421 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924431 4834 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924440 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924450 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924461 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpj8l\" (UniqueName: \"kubernetes.io/projected/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kube-api-access-qpj8l\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.924471 4834 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.944636 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.995176 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerID="f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5" exitCode=0
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.995232 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2ffx"
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.995247 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerDied","Data":"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"}
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.995290 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2ffx" event={"ID":"3bd802b1-4bcc-4604-a82e-5e84a0f0338e","Type":"ContainerDied","Data":"a7eb0b76b29c662abe7458c9436b1ced70300eea382d8639c019cae4f7113266"}
Jan 21 14:56:22 crc kubenswrapper[4834]: I0121 14:56:22.995312 4834 scope.go:117] "RemoveContainer" containerID="f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:22.999255 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_afa0d119-4c43-4161-8e43-94de0b186cb8/ovn-northd/0.log"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:22.999371 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"afa0d119-4c43-4161-8e43-94de0b186cb8","Type":"ContainerDied","Data":"7e0e235525d2749bdc2e23b8babceed11902ea21da5b237e3fd84eca239eb1a5"}
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:22.999440 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.002816 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-577b97bbf9-tdqtw"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.003759 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-577b97bbf9-tdqtw" event={"ID":"2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9","Type":"ContainerDied","Data":"0b0e03b6e28c230eadd75717a4ce66633d5d675124e510dff72fbe45294707ea"}
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.010490 4834 generic.go:334] "Generic (PLEG): container finished" podID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerID="41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8" exitCode=0
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.010606 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerDied","Data":"41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8"}
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.026064 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.028315 4834 generic.go:334] "Generic (PLEG): container finished" podID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerID="c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e" exitCode=0
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.028420 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerDied","Data":"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"}
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.028481 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee","Type":"ContainerDied","Data":"9d324abad33ce58ea0c970d989daf4ba3bed4727ecba0770e465e2101e1f5904"}
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.028593 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.061751 4834 scope.go:117] "RemoveContainer" containerID="0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.083269 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.090470 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.114210 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.122866 4834 scope.go:117] "RemoveContainer" containerID="94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.125218 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-577b97bbf9-tdqtw"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.131951 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.138832 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.146367 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2ffx"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.152412 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2ffx"]
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.153968 4834 scope.go:117] "RemoveContainer" containerID="f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"
Jan 21 14:56:23 crc kubenswrapper[4834]: E0121 14:56:23.157272 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5\": container with ID starting with f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5 not found: ID does not exist" containerID="f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.157337 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5"} err="failed to get container status \"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5\": rpc error: code = NotFound desc = could not find container \"f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5\": container with ID starting with f642bb2d33a76479c1ccfc5f7cbd26a3f282e6406dba61ed5eafd17172739ce5 not found: ID does not exist"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.157369 4834 scope.go:117] "RemoveContainer" containerID="0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"
Jan 21 14:56:23 crc kubenswrapper[4834]: E0121 14:56:23.159093 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64\": container with ID starting with 0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64 not found: ID does not exist" containerID="0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.159207 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64"} err="failed to get container status \"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64\": rpc error: code = NotFound desc = could not find container \"0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64\": container with ID starting with 0472ef362c81b03e3103920d08023b7d8e035667375b37c1caf3eb7bd2ca4d64 not found: ID does not exist"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.159246 4834 scope.go:117] "RemoveContainer" containerID="94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c"
Jan 21 14:56:23 crc kubenswrapper[4834]: E0121 14:56:23.159634 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c\": container with ID starting with 94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c not found: ID does not exist" containerID="94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.159679 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c"} err="failed to get container status \"94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c\": rpc error: code = NotFound desc = could not find container \"94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c\": container with ID starting with 94bf5f51b3b4b8698da16b26634ded707910b936d5c478596f8295ec8f4c553c not found: ID does not exist"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.159715 4834 scope.go:117] "RemoveContainer" containerID="fa01f639bc91875c2d0bcb559a386c138421ca21854b35458ce844de676ca39a"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.180557 4834 scope.go:117] "RemoveContainer" containerID="65ef3aa52c1d731098648bae444e93a7363d1d557c7ccf802926252d65a51464"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.203614 4834 scope.go:117] "RemoveContainer" containerID="0d518f3b26bced1d0a24c01a084dcae54af88409651e0fec0bea0e6b5762886a"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.230051 4834 scope.go:117] "RemoveContainer" containerID="c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.254985 4834 scope.go:117] "RemoveContainer" containerID="a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.281846 4834 scope.go:117] "RemoveContainer" containerID="c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"
Jan 21 14:56:23 crc kubenswrapper[4834]: E0121 14:56:23.283541 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e\": container with ID starting with c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e not found: ID does not exist" containerID="c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.283612 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e"} err="failed to get container status \"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e\": rpc error: code = NotFound desc = could not find container \"c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e\": container with ID starting with c56dd1550e99dff765380b62d7eeac2200b32644c8d5cad3206f221fcce42b3e not found: ID does not exist"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.283651 4834 scope.go:117] "RemoveContainer" containerID="a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"
Jan 21 14:56:23 crc kubenswrapper[4834]: E0121 14:56:23.284542 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2\": container with ID starting with a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2 not found: ID does not exist" containerID="a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"
Jan 21 14:56:23 crc kubenswrapper[4834]: I0121 14:56:23.284609 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2"} err="failed to get container status \"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2\": rpc error: code = NotFound desc = could not find container \"a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2\": container with ID starting with a304e466ffac2360fcc61f939a7f1c3fbe13a54cbc0762835ab1be3c43ad38d2 not found: ID does not exist"
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.054828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerStarted","Data":"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6"}
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.096665 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ptknb" podStartSLOduration=6.325592927 podStartE2EDuration="9.096639205s" podCreationTimestamp="2026-01-21 14:56:15 +0000 UTC" firstStartedPulling="2026-01-21 14:56:20.725189185 +0000 UTC m=+1526.699538230" lastFinishedPulling="2026-01-21 14:56:23.496235463 +0000 UTC m=+1529.470584508" observedRunningTime="2026-01-21 14:56:24.07594232 +0000 UTC m=+1530.050291385" watchObservedRunningTime="2026-01-21 14:56:24.096639205 +0000 UTC m=+1530.070988250"
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.354572 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" path="/var/lib/kubelet/pods/2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9/volumes"
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.355140 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" path="/var/lib/kubelet/pods/3bd802b1-4bcc-4604-a82e-5e84a0f0338e/volumes"
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.355835 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" path="/var/lib/kubelet/pods/afa0d119-4c43-4161-8e43-94de0b186cb8/volumes"
Jan 21 14:56:24 crc kubenswrapper[4834]: I0121 14:56:24.357082 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" path="/var/lib/kubelet/pods/d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee/volumes"
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.600625 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.601660 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.602013 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.602048 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server"
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.602454 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.604563 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.606257 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 21 14:56:25 crc kubenswrapper[4834]: E0121 14:56:25.606317
4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:26 crc kubenswrapper[4834]: I0121 14:56:26.491112 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:26 crc kubenswrapper[4834]: I0121 14:56:26.491570 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:26 crc kubenswrapper[4834]: I0121 14:56:26.541464 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:28 crc kubenswrapper[4834]: I0121 14:56:28.154107 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:28 crc kubenswrapper[4834]: I0121 14:56:28.870734 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.122343 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ptknb" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="registry-server" containerID="cri-o://6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6" gracePeriod=2 Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.602162 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.604779 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.609079 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.609165 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.609557 4834 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.626093 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.652058 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:30 crc kubenswrapper[4834]: E0121 14:56:30.652150 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.872341 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.967634 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities\") pod \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.967744 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w2zs\" (UniqueName: \"kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs\") pod \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.967803 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content\") pod \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\" (UID: \"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b\") " Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.969153 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities" (OuterVolumeSpecName: "utilities") pod "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" (UID: "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.981197 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs" (OuterVolumeSpecName: "kube-api-access-9w2zs") pod "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" (UID: "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b"). InnerVolumeSpecName "kube-api-access-9w2zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:30 crc kubenswrapper[4834]: I0121 14:56:30.999459 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" (UID: "3aab7dd9-42d0-46b5-95d7-1d98b5f5521b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.069556 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.069615 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w2zs\" (UniqueName: \"kubernetes.io/projected/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-kube-api-access-9w2zs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.069629 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.133662 4834 generic.go:334] "Generic (PLEG): container finished" podID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerID="6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6" exitCode=0 Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.133716 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerDied","Data":"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6"} Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.133750 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptknb" event={"ID":"3aab7dd9-42d0-46b5-95d7-1d98b5f5521b","Type":"ContainerDied","Data":"21c6cdd6527c5de19003e3fd0a135df6eaaa9e07c4191293c107b7ae1eb02afe"} Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.133771 4834 scope.go:117] "RemoveContainer" containerID="6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.133792 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptknb" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.157202 4834 scope.go:117] "RemoveContainer" containerID="41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.177435 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.183247 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptknb"] Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.196477 4834 scope.go:117] "RemoveContainer" containerID="10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.261038 4834 scope.go:117] "RemoveContainer" containerID="6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6" Jan 21 14:56:31 crc kubenswrapper[4834]: E0121 14:56:31.264206 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6\": container with ID starting with 6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6 not found: ID does not exist" containerID="6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.264262 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6"} err="failed to get container status \"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6\": rpc error: code = NotFound desc = could not find container \"6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6\": container with ID starting with 6fe16c394b2b790ae18f543239223364bdcf47dbbfa5ebf66b06bf542e9101e6 not found: ID does not exist" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.264319 4834 scope.go:117] "RemoveContainer" containerID="41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8" Jan 21 14:56:31 crc kubenswrapper[4834]: E0121 14:56:31.265176 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8\": container with ID starting with 41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8 not found: ID does not exist" containerID="41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.265209 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8"} err="failed to get container status \"41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8\": rpc error: code = NotFound desc = could not find container \"41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8\": container with ID starting with 41dbba880cba07eb794f15c8b9f41e7a676670f929306f69e70f648ed83f41b8 not found: ID does not exist" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.265253 4834 scope.go:117] "RemoveContainer" containerID="10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d" Jan 21 14:56:31 crc kubenswrapper[4834]: E0121 14:56:31.266055 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d\": container with ID starting with 10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d not found: ID does not exist" containerID="10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d" Jan 21 14:56:31 crc kubenswrapper[4834]: I0121 14:56:31.266091 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d"} err="failed to get container status \"10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d\": rpc error: code = NotFound desc = could not find container \"10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d\": container with ID starting with 10115a585e866ca45e059b2014d77fad69ca8d4f86e3ef469ca8c8f37f19ae0d not found: ID does not exist" Jan 21 14:56:32 crc kubenswrapper[4834]: I0121 14:56:32.334405 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" path="/var/lib/kubelet/pods/3aab7dd9-42d0-46b5-95d7-1d98b5f5521b/volumes" Jan 21 14:56:33 crc kubenswrapper[4834]: I0121 14:56:33.325247 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:56:33 crc kubenswrapper[4834]: E0121 14:56:33.325667 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.173877 4834 generic.go:334] "Generic (PLEG): container finished" podID="507328e4-20c4-4e84-b781-e4889419607e" containerID="5021b65d7845ef850fa0bbc7c486df5fe601a95715feb8020d57c8b3cb5c5c7e" exitCode=0 Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.173942 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerDied","Data":"5021b65d7845ef850fa0bbc7c486df5fe601a95715feb8020d57c8b3cb5c5c7e"} Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263550 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263877 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263893 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263904 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="sg-core" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263910 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="sg-core" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263920 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="extract-utilities" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263944 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="extract-utilities" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263956 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="setup-container" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263963 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="setup-container" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263975 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerName="nova-cell1-conductor-conductor" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263982 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerName="nova-cell1-conductor-conductor" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.263991 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.263996 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264009 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264016 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264026 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-central-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264032 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-central-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264042 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c79dc8-1d60-46f6-add1-1783486562f2" containerName="kube-state-metrics" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264049 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c79dc8-1d60-46f6-add1-1783486562f2" containerName="kube-state-metrics" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264063 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="extract-content" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264070 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="extract-content" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264079 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener-log" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264088 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" 
containerName="barbican-keystone-listener-log" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264098 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264105 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264112 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264118 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264125 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="setup-container" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264133 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="setup-container" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264147 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264160 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264170 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="openstack-network-exporter" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264180 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="openstack-network-exporter" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264194 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="proxy-httpd" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264201 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="proxy-httpd" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264213 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="extract-utilities" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264221 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="extract-utilities" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264231 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="mysql-bootstrap" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264238 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="mysql-bootstrap" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264250 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker-log" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264259 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker-log" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264271 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-notification-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264279 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-notification-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264293 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" containerName="memcached" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264301 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" containerName="memcached" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264309 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerName="nova-scheduler-scheduler" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264317 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerName="nova-scheduler-scheduler" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264328 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="extract-content" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264334 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="extract-content" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264344 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264351 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264358 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="galera" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264364 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="galera" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264376 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264383 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264392 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264399 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264407 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" containerName="keystone-api" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 
14:56:34.264413 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" containerName="keystone-api" Jan 21 14:56:34 crc kubenswrapper[4834]: E0121 14:56:34.264423 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.264430 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267228 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267284 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="proxy-httpd" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267304 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9714a2-fadf-48a3-8b71-07d7419cc713" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267320 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd802b1-4bcc-4604-a82e-5e84a0f0338e" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267331 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="234831ee-247b-40ae-9c71-db9d7b45d275" containerName="memcached" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267350 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c79dc8-1d60-46f6-add1-1783486562f2" containerName="kube-state-metrics" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267367 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267375 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cf94c8-2d73-4940-a873-775f2cba8ce5" containerName="barbican-keystone-listener-log" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267386 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-central-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267394 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="ovn-northd" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267406 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87b73b4-2715-4ce7-81b3-df0c1f57922f" containerName="rabbitmq" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267415 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4223c5-ca7c-4eb7-a6d6-fc7f4c9dd9e9" containerName="keystone-api" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267425 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ae24ac-0d2e-4d6d-9417-9f3f7f8081ee" containerName="galera" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267435 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa0d119-4c43-4161-8e43-94de0b186cb8" containerName="openstack-network-exporter" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267443 4834 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e9503bd6-1084-408a-8e1d-65d66dab4170" containerName="nova-scheduler-scheduler" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267452 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="sg-core" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267461 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eff96a-de09-4e96-9fe2-21b1eaedaacc" containerName="ceilometer-notification-agent" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267469 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc9a766-6bb5-4585-881f-019c2f33f096" containerName="ovn-controller" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267478 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aab7dd9-42d0-46b5-95d7-1d98b5f5521b" containerName="registry-server" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267492 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker-log" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267499 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ff4f13-b754-4f82-accc-54ed420dce2e" containerName="nova-cell1-conductor-conductor" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267509 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2977cc62-5ade-40e1-b2ba-1bfe044d2f0f" containerName="mariadb-account-create-update" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.267518 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="84309501-c399-4d83-9876-00b58ba67b0d" containerName="barbican-worker" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.268829 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.293777 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.325112 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.325179 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.325207 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpx5m\" (UniqueName: \"kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.434512 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.434580 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.434599 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpx5m\" (UniqueName: \"kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.435459 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.436245 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.482727 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bpx5m\" (UniqueName: \"kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m\") pod \"certified-operators-d8qrb\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.610750 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.620285 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637597 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637645 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637683 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637702 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637842 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637888 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.637913 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdgj\" (UniqueName: \"kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj\") pod \"507328e4-20c4-4e84-b781-e4889419607e\" (UID: \"507328e4-20c4-4e84-b781-e4889419607e\") " Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.658179 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.658258 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj" (OuterVolumeSpecName: "kube-api-access-7zdgj") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "kube-api-access-7zdgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.739715 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.739759 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdgj\" (UniqueName: \"kubernetes.io/projected/507328e4-20c4-4e84-b781-e4889419607e-kube-api-access-7zdgj\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.751243 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.753447 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.755369 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config" (OuterVolumeSpecName: "config") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.769080 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.772149 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "507328e4-20c4-4e84-b781-e4889419607e" (UID: "507328e4-20c4-4e84-b781-e4889419607e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.841332 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.841374 4834 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.841384 4834 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.841393 4834 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:34 crc kubenswrapper[4834]: I0121 14:56:34.841403 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/507328e4-20c4-4e84-b781-e4889419607e-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.185898 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-87996dbdf-vzvsk" event={"ID":"507328e4-20c4-4e84-b781-e4889419607e","Type":"ContainerDied","Data":"fe157082c85ca69b545df4d7e709257dac1a932dabc26e0aa4e5dbff158eced6"} Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.185998 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-87996dbdf-vzvsk" Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.186239 4834 scope.go:117] "RemoveContainer" containerID="0753d72f76c2ebcc2cbf477c10a770f53df938062ba6df9b3f55fe8788125c99" Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.214914 4834 scope.go:117] "RemoveContainer" containerID="5021b65d7845ef850fa0bbc7c486df5fe601a95715feb8020d57c8b3cb5c5c7e" Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.225188 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.233598 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-87996dbdf-vzvsk"] Jan 21 14:56:35 crc kubenswrapper[4834]: I0121 14:56:35.262782 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.600689 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.601114 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.601394 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.601421 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.602590 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.604165 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.605230 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:35 crc kubenswrapper[4834]: E0121 14:56:35.605272 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:36 crc kubenswrapper[4834]: I0121 14:56:36.200124 4834 generic.go:334] "Generic (PLEG): container finished" podID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerID="e8d819826c8dbe9687e5d0fb0e9fdf4fcb5b8d60afb4142e6a4dbf46282d6fca" exitCode=0 Jan 21 14:56:36 crc kubenswrapper[4834]: I0121 14:56:36.200515 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerDied","Data":"e8d819826c8dbe9687e5d0fb0e9fdf4fcb5b8d60afb4142e6a4dbf46282d6fca"} Jan 21 14:56:36 crc kubenswrapper[4834]: I0121 14:56:36.200544 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerStarted","Data":"1801adbba1d4cd117979683785c60587e8a93c290fbf0050314177e8df0966a8"} Jan 21 14:56:36 crc kubenswrapper[4834]: I0121 14:56:36.335276 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507328e4-20c4-4e84-b781-e4889419607e" path="/var/lib/kubelet/pods/507328e4-20c4-4e84-b781-e4889419607e/volumes" Jan 21 14:56:37 crc kubenswrapper[4834]: I0121 14:56:37.211963 4834 generic.go:334] "Generic (PLEG): container finished" podID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerID="acc303420528299d3424a9554c1ca62742753bb22140879238dcebdf5b8d1cd9" exitCode=0 Jan 21 14:56:37 crc kubenswrapper[4834]: I0121 14:56:37.212029 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerDied","Data":"acc303420528299d3424a9554c1ca62742753bb22140879238dcebdf5b8d1cd9"} Jan 21 14:56:38 crc kubenswrapper[4834]: I0121 14:56:38.226133 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerStarted","Data":"64af7a59b17fc68dbd006c245a17e5ed4b3ad7c98fcbc0f97a975e93ec8dd1c8"} Jan 21 14:56:38 crc kubenswrapper[4834]: I0121 14:56:38.255172 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d8qrb" podStartSLOduration=2.758367867 podStartE2EDuration="4.255141614s" podCreationTimestamp="2026-01-21 14:56:34 +0000 UTC" firstStartedPulling="2026-01-21 14:56:36.202395168 +0000 UTC m=+1542.176744213" lastFinishedPulling="2026-01-21 14:56:37.699168915 +0000 UTC m=+1543.673517960" observedRunningTime="2026-01-21 14:56:38.24827808 +0000 UTC m=+1544.222627125" watchObservedRunningTime="2026-01-21 14:56:38.255141614 +0000 UTC m=+1544.229490649" Jan 21 14:56:40 crc 
kubenswrapper[4834]: E0121 14:56:40.600808 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.601672 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.601965 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.601992 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.602508 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.604136 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.605917 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 21 14:56:40 crc kubenswrapper[4834]: E0121 14:56:40.606033 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ztq6r" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.014687 4834 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.059482 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") pod \"835da3fd-0497-4072-9d76-122d19300787\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.059530 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6wtd\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd\") pod \"835da3fd-0497-4072-9d76-122d19300787\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.059592 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"835da3fd-0497-4072-9d76-122d19300787\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.059661 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock\") pod \"835da3fd-0497-4072-9d76-122d19300787\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.059722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache\") pod \"835da3fd-0497-4072-9d76-122d19300787\" (UID: \"835da3fd-0497-4072-9d76-122d19300787\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.060770 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache" (OuterVolumeSpecName: "cache") pod "835da3fd-0497-4072-9d76-122d19300787" (UID: "835da3fd-0497-4072-9d76-122d19300787"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.061442 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock" (OuterVolumeSpecName: "lock") pod "835da3fd-0497-4072-9d76-122d19300787" (UID: "835da3fd-0497-4072-9d76-122d19300787"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.070114 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "835da3fd-0497-4072-9d76-122d19300787" (UID: "835da3fd-0497-4072-9d76-122d19300787"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.070232 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd" (OuterVolumeSpecName: "kube-api-access-r6wtd") pod "835da3fd-0497-4072-9d76-122d19300787" (UID: "835da3fd-0497-4072-9d76-122d19300787"). InnerVolumeSpecName "kube-api-access-r6wtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.084939 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "835da3fd-0497-4072-9d76-122d19300787" (UID: "835da3fd-0497-4072-9d76-122d19300787"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.162370 4834 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-cache\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.162430 4834 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.162446 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6wtd\" (UniqueName: \"kubernetes.io/projected/835da3fd-0497-4072-9d76-122d19300787-kube-api-access-r6wtd\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.162497 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.162508 4834 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/835da3fd-0497-4072-9d76-122d19300787-lock\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.195996 4834 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.263952 4834 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.269387 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztq6r_5f229152-a987-497e-8777-937b4f6880d0/ovs-vswitchd/0.log" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.270700 4834 generic.go:334] "Generic (PLEG): container finished" podID="5f229152-a987-497e-8777-937b4f6880d0" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" exitCode=137 Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.270777 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerDied","Data":"bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3"} Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.298852 4834 generic.go:334] "Generic (PLEG): container finished" podID="835da3fd-0497-4072-9d76-122d19300787" containerID="3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5" exitCode=137 Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.298964 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5"} Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.299044 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"835da3fd-0497-4072-9d76-122d19300787","Type":"ContainerDied","Data":"e366ab495ac8bbb23d901159b4caa451f94ffa5685dde04cea167f4e96d7b1d8"} Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.299074 4834 scope.go:117] "RemoveContainer" containerID="3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.299354 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.343532 4834 scope.go:117] "RemoveContainer" containerID="fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.375727 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.391445 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.405350 4834 scope.go:117] "RemoveContainer" containerID="16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.429573 4834 scope.go:117] "RemoveContainer" containerID="537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.464979 4834 scope.go:117] "RemoveContainer" containerID="7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.532980 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztq6r_5f229152-a987-497e-8777-937b4f6880d0/ovs-vswitchd/0.log" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.533766 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.536122 4834 scope.go:117] "RemoveContainer" containerID="9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.557490 4834 scope.go:117] "RemoveContainer" containerID="fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.567885 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.567976 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568011 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568014 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568078 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568112 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9qtf\" (UniqueName: \"kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568135 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log" (OuterVolumeSpecName: "var-log") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568155 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib\") pod \"5f229152-a987-497e-8777-937b4f6880d0\" (UID: \"5f229152-a987-497e-8777-937b4f6880d0\") " Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568120 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run" (OuterVolumeSpecName: "var-run") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568318 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib" (OuterVolumeSpecName: "var-lib") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568612 4834 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-lib\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568640 4834 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568654 4834 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.568676 4834 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f229152-a987-497e-8777-937b4f6880d0-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.569949 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts" (OuterVolumeSpecName: "scripts") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.583296 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf" (OuterVolumeSpecName: "kube-api-access-x9qtf") pod "5f229152-a987-497e-8777-937b4f6880d0" (UID: "5f229152-a987-497e-8777-937b4f6880d0"). InnerVolumeSpecName "kube-api-access-x9qtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.593891 4834 scope.go:117] "RemoveContainer" containerID="b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.616530 4834 scope.go:117] "RemoveContainer" containerID="6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.637746 4834 scope.go:117] "RemoveContainer" containerID="a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.658506 4834 scope.go:117] "RemoveContainer" containerID="df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.670218 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9qtf\" (UniqueName: \"kubernetes.io/projected/5f229152-a987-497e-8777-937b4f6880d0-kube-api-access-x9qtf\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.670253 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f229152-a987-497e-8777-937b4f6880d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.685048 4834 scope.go:117] "RemoveContainer" containerID="0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.711748 4834 scope.go:117] "RemoveContainer" containerID="f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.744124 4834 scope.go:117] "RemoveContainer" containerID="5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.765329 4834 scope.go:117] "RemoveContainer" containerID="631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.792910 4834 scope.go:117] "RemoveContainer" containerID="3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.793763 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5\": container with ID starting with 3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5 not found: ID does not exist" containerID="3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.793888 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5"} err="failed to get container status \"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5\": rpc error: code = NotFound desc = could not find container \"3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5\": container with ID starting with 3b6adbccbc8a93e9a9af3e3928be888408f10514cd37e7ba048249cc52d30dd5 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.794157 4834 scope.go:117] "RemoveContainer" containerID="fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.794871 4834 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f\": container with ID starting with fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f not found: ID does not exist" containerID="fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.794909 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f"} err="failed to get container status \"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f\": rpc error: code = NotFound desc = could not find container \"fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f\": container with ID starting with fabe25ad14932f25318d4ac7235dd59989485a64652693ad6c9543df59e04c7f not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.794949 4834 scope.go:117] "RemoveContainer" containerID="16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.795284 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047\": container with ID starting with 16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047 not found: ID does not exist" containerID="16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.795308 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047"} err="failed to get container status \"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047\": rpc error: code = NotFound desc = could not find container \"16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047\": container with ID starting with 16c055f569b6c617a0871a52d9ef90d67182ab7adf1cdddbb3605b921917a047 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.795323 4834 scope.go:117] "RemoveContainer" containerID="537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.796055 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27\": container with ID starting with 537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27 not found: ID does not exist" containerID="537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.796087 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27"} err="failed to get container status \"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27\": rpc error: code = NotFound desc = could not find container \"537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27\": container with ID starting with 537c6525a34fe4e30d1e52105c5ff5f5b0ae99519058faba89989c0a1568cb27 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.796105 4834 scope.go:117] "RemoveContainer" 
containerID="7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.796480 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01\": container with ID starting with 7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01 not found: ID does not exist" containerID="7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.796558 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01"} err="failed to get container status \"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01\": rpc error: code = NotFound desc = could not find container \"7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01\": container with ID starting with 7825ad449f2f4d5f9c44e69607a340f29f942bb123d829be8af5579d45448a01 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.796608 4834 scope.go:117] "RemoveContainer" containerID="9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.797096 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc\": container with ID starting with 9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc not found: ID does not exist" containerID="9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.797131 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc"} err="failed to get container status \"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc\": rpc error: code = NotFound desc = could not find container \"9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc\": container with ID starting with 9e81e166d92ae2b0dfaee5a04ca94b76a1affc0c404dd09be7a614974c5b98dc not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.797153 4834 scope.go:117] "RemoveContainer" containerID="fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.797637 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693\": container with ID starting with fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693 not found: ID does not exist" containerID="fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.797667 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693"} err="failed to get container status \"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693\": rpc error: code = NotFound desc = could not find container \"fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693\": container with ID starting with 
fb292f7e25b1eac11f0aa24a29aac521520ac89d9430d4dd9149fccd054ba693 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.797688 4834 scope.go:117] "RemoveContainer" containerID="b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.798076 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785\": container with ID starting with b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785 not found: ID does not exist" containerID="b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798105 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785"} err="failed to get container status \"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785\": rpc error: code = NotFound desc = could not find container \"b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785\": container with ID starting with b36c7d3f0f2647f6bb5318fa161d92e87d4bdb0bf8a473e6665bcd58ea3e2785 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798124 4834 scope.go:117] "RemoveContainer" containerID="6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.798406 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453\": container with ID starting with 6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453 not found: ID does not exist" containerID="6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798437 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453"} err="failed to get container status \"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453\": rpc error: code = NotFound desc = could not find container \"6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453\": container with ID starting with 6e582ba4db8bb196850c50bb6412f2e450836542970313164e2ac45934aa2453 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798456 4834 scope.go:117] "RemoveContainer" containerID="a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.798880 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348\": container with ID starting with a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348 not found: ID does not exist" containerID="a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798907 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348"} err="failed to get container status \"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348\": rpc 
error: code = NotFound desc = could not find container \"a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348\": container with ID starting with a919357436c1e9bfcd2c1b30be862decb67a4aeb346e2686d33316e1e1871348 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.798950 4834 scope.go:117] "RemoveContainer" containerID="df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.799318 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5\": container with ID starting with df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5 not found: ID does not exist" containerID="df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.799355 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5"} err="failed to get container status \"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5\": rpc error: code = NotFound desc = could not find container \"df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5\": container with ID starting with df8b3f35f2225c9dffc06b7807e97c8427473a93a7a25d3bbfb21529e5ce15c5 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.799375 4834 scope.go:117] "RemoveContainer" containerID="0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.799766 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b\": container with ID starting with 0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b not found: ID does not exist" containerID="0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.799792 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b"} err="failed to get container status \"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b\": rpc error: code = NotFound desc = could not find container \"0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b\": container with ID starting with 0cddc41ecdc2a819959dc265f1dcac25c93b958fe2af9c9c72ecab52f4641c5b not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.799813 4834 scope.go:117] "RemoveContainer" containerID="f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.800191 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b\": container with ID starting with f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b not found: ID does not exist" containerID="f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.800219 4834 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b"} err="failed to get container status \"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b\": rpc error: code = NotFound desc = could not find container \"f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b\": container with ID starting with f4996714f941141aab2345159a8fe776fbc553a49fdbf672da20f371f2ca871b not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.800234 4834 scope.go:117] "RemoveContainer" containerID="5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.800465 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398\": container with ID starting with 5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398 not found: ID does not exist" containerID="5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.800486 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398"} err="failed to get container status \"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398\": rpc error: code = NotFound desc = could not find container \"5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398\": container with ID starting with 5770971276ba43e340e1275e2d5847c7c37a062a8312a6de0b0386cbb8808398 not found: ID does not exist" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.800501 4834 scope.go:117] "RemoveContainer" containerID="631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314" Jan 21 14:56:42 crc kubenswrapper[4834]: E0121 14:56:42.801575 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314\": container with ID starting with 631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314 not found: ID does not exist" containerID="631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314" Jan 21 14:56:42 crc kubenswrapper[4834]: I0121 14:56:42.801630 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314"} err="failed to get container status \"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314\": rpc error: code = NotFound desc = could not find container \"631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314\": container with ID starting with 631d80121c655d2b169c6f3fcf5aba023ad44937c96b6cfa4d00e9fa73a9c314 not found: ID does not exist" Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.310213 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ztq6r_5f229152-a987-497e-8777-937b4f6880d0/ovs-vswitchd/0.log" Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.312446 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ztq6r" event={"ID":"5f229152-a987-497e-8777-937b4f6880d0","Type":"ContainerDied","Data":"0b06e71e7b2f8158a53dec122fb2883c511b0fc8f3e7f6e5e914762326d14f17"} Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.312488 4834 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ztq6r" Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.312502 4834 scope.go:117] "RemoveContainer" containerID="bf81b1dd5336aa6d359fd0ba3f50a3c73b4660bd906497d153862f5c10381da3" Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.336286 4834 scope.go:117] "RemoveContainer" containerID="71279543c8f3bcfd53ee63c33e7b1fb02c6e37f1baca91f36d7c887ed6d43b3c" Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.345494 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.355957 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-ztq6r"] Jan 21 14:56:43 crc kubenswrapper[4834]: I0121 14:56:43.359842 4834 scope.go:117] "RemoveContainer" containerID="e670dfc9b5b6fa7161d55bec337b50e6a0762c64b164d107b62dcae1c0aacfd9" Jan 21 14:56:44 crc kubenswrapper[4834]: I0121 14:56:44.333285 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f229152-a987-497e-8777-937b4f6880d0" path="/var/lib/kubelet/pods/5f229152-a987-497e-8777-937b4f6880d0/volumes" Jan 21 14:56:44 crc kubenswrapper[4834]: I0121 14:56:44.334586 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835da3fd-0497-4072-9d76-122d19300787" path="/var/lib/kubelet/pods/835da3fd-0497-4072-9d76-122d19300787/volumes" Jan 21 14:56:44 crc kubenswrapper[4834]: I0121 14:56:44.620520 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:44 crc kubenswrapper[4834]: I0121 14:56:44.620590 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:44 crc kubenswrapper[4834]: I0121 14:56:44.669350 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:45 crc kubenswrapper[4834]: I0121 14:56:45.380270 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:45 crc kubenswrapper[4834]: I0121 14:56:45.446798 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:45 crc kubenswrapper[4834]: I0121 14:56:45.580811 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-69bb684bc8-6s7qv" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.152:8778/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:45 crc kubenswrapper[4834]: I0121 14:56:45.581291 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-69bb684bc8-6s7qv" podUID="46ef0752-abe1-465f-8b0b-77906b861c12" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.152:8778/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 14:56:47 crc kubenswrapper[4834]: I0121 14:56:47.325150 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:56:47 crc kubenswrapper[4834]: E0121 14:56:47.325984 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:56:47 crc kubenswrapper[4834]: I0121 14:56:47.357380 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d8qrb" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="registry-server" containerID="cri-o://64af7a59b17fc68dbd006c245a17e5ed4b3ad7c98fcbc0f97a975e93ec8dd1c8" gracePeriod=2 Jan 21 14:56:47 crc kubenswrapper[4834]: I0121 14:56:47.370665 4834 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod28da9710-d30d-4fe5-ab02-aadd9b32ab1e"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod28da9710-d30d-4fe5-ab02-aadd9b32ab1e] : Timed out while waiting for systemd to remove kubepods-besteffort-pod28da9710_d30d_4fe5_ab02_aadd9b32ab1e.slice" Jan 21 14:56:47 crc kubenswrapper[4834]: I0121 14:56:47.395834 4834 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1e74faea-a792-455c-a253-7012f98c6acf"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1e74faea-a792-455c-a253-7012f98c6acf] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1e74faea_a792_455c_a253_7012f98c6acf.slice" Jan 21 14:56:48 crc kubenswrapper[4834]: I0121 14:56:48.368983 4834 generic.go:334] "Generic (PLEG): container finished" podID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerID="64af7a59b17fc68dbd006c245a17e5ed4b3ad7c98fcbc0f97a975e93ec8dd1c8" exitCode=0 Jan 21 14:56:48 crc kubenswrapper[4834]: I0121 14:56:48.369169 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerDied","Data":"64af7a59b17fc68dbd006c245a17e5ed4b3ad7c98fcbc0f97a975e93ec8dd1c8"} Jan 21 14:56:48 crc kubenswrapper[4834]: I0121 14:56:48.885542 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.073852 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities\") pod \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.073941 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpx5m\" (UniqueName: \"kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m\") pod \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.074024 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content\") pod \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\" (UID: \"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac\") " Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.075017 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities" (OuterVolumeSpecName: "utilities") pod "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" (UID: "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.080857 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m" (OuterVolumeSpecName: "kube-api-access-bpx5m") pod "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" (UID: "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac"). InnerVolumeSpecName "kube-api-access-bpx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.136257 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" (UID: "14d58cb6-7221-4dc8-8ffa-552aa6dff1ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.175104 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.175155 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpx5m\" (UniqueName: \"kubernetes.io/projected/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-kube-api-access-bpx5m\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.175168 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.380217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8qrb" event={"ID":"14d58cb6-7221-4dc8-8ffa-552aa6dff1ac","Type":"ContainerDied","Data":"1801adbba1d4cd117979683785c60587e8a93c290fbf0050314177e8df0966a8"} Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.380271 4834 scope.go:117] "RemoveContainer" containerID="64af7a59b17fc68dbd006c245a17e5ed4b3ad7c98fcbc0f97a975e93ec8dd1c8" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.380391 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8qrb" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.407524 4834 scope.go:117] "RemoveContainer" containerID="acc303420528299d3424a9554c1ca62742753bb22140879238dcebdf5b8d1cd9" Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.412207 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.421133 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d8qrb"] Jan 21 14:56:49 crc kubenswrapper[4834]: I0121 14:56:49.437693 4834 scope.go:117] "RemoveContainer" containerID="e8d819826c8dbe9687e5d0fb0e9fdf4fcb5b8d60afb4142e6a4dbf46282d6fca" Jan 21 14:56:50 crc kubenswrapper[4834]: I0121 14:56:50.337451 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" path="/var/lib/kubelet/pods/14d58cb6-7221-4dc8-8ffa-552aa6dff1ac/volumes" Jan 21 14:56:58 crc kubenswrapper[4834]: I0121 14:56:58.324741 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:56:58 crc kubenswrapper[4834]: E0121 14:56:58.325966 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.038261 4834 scope.go:117] "RemoveContainer" containerID="e6ec65d0f480c3b4dab94f41c704198e1bbe527e959876c8f6b812b0f9ebc505" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.071505 4834 scope.go:117] "RemoveContainer" containerID="4584afc588265a66c1cc48f6be939cf379f4c49c72c881e4d5864241a68a8ecd" Jan 
21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.099671 4834 scope.go:117] "RemoveContainer" containerID="26c58187ecc7217104ccff0eda5c2f6560785955665bcfa555a8b04789f332b8" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.137386 4834 scope.go:117] "RemoveContainer" containerID="ac1ec9e0c07a964592444b918be2b80a83221c681fdd18500124a549b54bbe57" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.158364 4834 scope.go:117] "RemoveContainer" containerID="e7d2fdd8aa6d58481bcc56fd724bf04ab9b6133ef98b9686261c001278873999" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.187160 4834 scope.go:117] "RemoveContainer" containerID="44a1ebe7b3d2a58ed234b940dcad849407babcc59811e4b7c8c1488e3e574a78" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.223641 4834 scope.go:117] "RemoveContainer" containerID="91ae6eb26515c13884731bd4fedbd55169de35554d0a031e75ea1836e5893a76" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.252172 4834 scope.go:117] "RemoveContainer" containerID="9cdf46c68fa323acb850f14c8117071740959cc8e11f908c325e927865ffe66b" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.278961 4834 scope.go:117] "RemoveContainer" containerID="eec097a377919f1aac1f1e4555fbca25a4859225cb53121ee4397bb159e45c71" Jan 21 14:57:03 crc kubenswrapper[4834]: I0121 14:57:03.300746 4834 scope.go:117] "RemoveContainer" containerID="a11fb495757824d138ac77a3fef9d20b66153b3238dbfc25af87a935ad9bf9f6" Jan 21 14:57:10 crc kubenswrapper[4834]: I0121 14:57:10.325430 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:57:10 crc kubenswrapper[4834]: E0121 14:57:10.326303 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:57:22 crc kubenswrapper[4834]: I0121 14:57:22.325330 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:57:22 crc kubenswrapper[4834]: E0121 14:57:22.326136 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:57:37 crc kubenswrapper[4834]: I0121 14:57:37.324961 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:57:37 crc kubenswrapper[4834]: E0121 14:57:37.326049 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:57:49 crc kubenswrapper[4834]: I0121 14:57:49.325644 4834 scope.go:117] "RemoveContainer" 
containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:57:49 crc kubenswrapper[4834]: E0121 14:57:49.326735 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.136520 4834 scope.go:117] "RemoveContainer" containerID="794676590f991a7ecdd6da4e45f2a58ee67264e8e686dd305a3cf7b02ffbff05" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.193890 4834 scope.go:117] "RemoveContainer" containerID="7450baf66953d71a3a8bda46db6cf374715a858243decdf6ef7582dec8f2a5b5" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.212430 4834 scope.go:117] "RemoveContainer" containerID="383c05e156769911de64cce006521de1a0f7d2414c6973c7a0ab36a32d8a2828" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.252180 4834 scope.go:117] "RemoveContainer" containerID="ff5f1996000fd126aeb8d7e3152bd522162d4bf437847b5af7b3bc4102ac459e" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.293066 4834 scope.go:117] "RemoveContainer" containerID="de2118a3c8532e3ed45fb4f40a6d807fde2e842586bd7cfe0287f3de051a0325" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.312197 4834 scope.go:117] "RemoveContainer" containerID="7f37d746c1e23773ff5721e4e997cd19227ff7f8c3be0289160cf0c721ed6064" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.328047 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:58:04 crc kubenswrapper[4834]: E0121 14:58:04.328408 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.384491 4834 scope.go:117] "RemoveContainer" containerID="faf65ed7d70df2b0daa57f403ad209c034d9c6c11576362ea79a3947583a86aa" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.419072 4834 scope.go:117] "RemoveContainer" containerID="5edbdeba0c4bdc90ae3b3facbfb2bc42083627d57cdf54fede1603200c831d27" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.442351 4834 scope.go:117] "RemoveContainer" containerID="eaed596558af2085827f8e984d860d3b77386778257ee96ee20a0f7e9596633a" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.466915 4834 scope.go:117] "RemoveContainer" containerID="a33ded5cfaf8c189abcd7f8aed73029a4a13ca9d9df07bab8cbd4ec86ae598cb" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.502405 4834 scope.go:117] "RemoveContainer" containerID="019916aa32a8d05882acc74c8e5940afedccdc288cba42e68ac13dd506dc1fa9" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.539323 4834 scope.go:117] "RemoveContainer" containerID="43faf0c44ed1cad6a34d28fbbeba8aa4bf7e8d3146039b96edf40f928b38b322" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.595883 4834 scope.go:117] "RemoveContainer" 
containerID="9e0eb4b587e1c4a06b4592384cbe89012e12116150f42a3c1da82d69989a8bf5" Jan 21 14:58:04 crc kubenswrapper[4834]: I0121 14:58:04.637650 4834 scope.go:117] "RemoveContainer" containerID="42991a0486e1e8bb2be287a8f47cbf50c0e3abd28979cca6ae904abd97a4f38c" Jan 21 14:58:15 crc kubenswrapper[4834]: I0121 14:58:15.325124 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:58:15 crc kubenswrapper[4834]: E0121 14:58:15.326477 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:58:26 crc kubenswrapper[4834]: I0121 14:58:26.325169 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:58:26 crc kubenswrapper[4834]: E0121 14:58:26.325849 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:58:37 crc kubenswrapper[4834]: I0121 14:58:37.325706 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:58:37 crc kubenswrapper[4834]: E0121 14:58:37.326569 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:58:50 crc kubenswrapper[4834]: I0121 14:58:50.325020 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:58:50 crc kubenswrapper[4834]: E0121 14:58:50.325891 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:59:02 crc kubenswrapper[4834]: I0121 14:59:02.324761 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:59:02 crc kubenswrapper[4834]: E0121 14:59:02.325662 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:59:04 crc kubenswrapper[4834]: I0121 14:59:04.849546 4834 scope.go:117] "RemoveContainer" containerID="d103dd9297c1d2baee70d0c4f7d9b78e333f4f0afc7a6a14c41398f6488db835" Jan 21 14:59:04 crc kubenswrapper[4834]: I0121 14:59:04.870227 4834 scope.go:117] "RemoveContainer" containerID="0aa4a9c8d5305f703392b0cef4cde0df1d32d6bc28fb27efa810c7fbaf538f33" Jan 21 14:59:04 crc kubenswrapper[4834]: I0121 14:59:04.896217 4834 scope.go:117] "RemoveContainer" containerID="9e7c05197361670761eb6161d2d7eaa43bcc0eac69d4f23d319dcb70cf798a41" Jan 21 14:59:04 crc kubenswrapper[4834]: I0121 14:59:04.944778 4834 scope.go:117] "RemoveContainer" containerID="238e9220c679c8123c7b1326da2371187ff7c09c525c7f859da1fe9356a819b6" Jan 21 14:59:04 crc kubenswrapper[4834]: I0121 14:59:04.964741 4834 scope.go:117] "RemoveContainer" containerID="88a31cfa48c85b0165aa88edab452893c52414c0c101f61a9dfa58c255844c9d" Jan 21 14:59:13 crc kubenswrapper[4834]: I0121 14:59:13.325050 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:59:13 crc kubenswrapper[4834]: E0121 14:59:13.326028 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:59:25 crc kubenswrapper[4834]: I0121 14:59:25.324331 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:59:25 crc kubenswrapper[4834]: E0121 14:59:25.325303 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:59:38 crc kubenswrapper[4834]: I0121 14:59:38.324802 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:59:38 crc kubenswrapper[4834]: E0121 14:59:38.326020 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 14:59:50 crc kubenswrapper[4834]: I0121 14:59:50.324310 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 14:59:50 crc kubenswrapper[4834]: E0121 14:59:50.326673 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.156111 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx"] Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157224 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157245 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157258 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157266 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157276 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157284 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157298 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157305 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157319 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157326 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157333 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-expirer" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157340 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-expirer" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157353 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157359 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157368 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-reaper" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157375 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" 
containerName="account-reaper" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157390 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157397 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157412 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157419 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157430 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-api" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157437 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-api" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157451 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157460 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-server" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157469 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157476 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157488 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157496 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157511 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="rsync" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157519 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="rsync" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157527 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157536 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="extract-utilities" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157552 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="swift-recon-cron" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157560 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="swift-recon-cron" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157569 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157577 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157585 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157593 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="extract-content" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157603 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157611 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-server" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157621 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157629 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157639 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157646 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-server" Jan 21 15:00:00 crc kubenswrapper[4834]: E0121 15:00:00.157656 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server-init" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157663 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server-init" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157829 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovsdb-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157849 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157860 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157869 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157879 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157888 4834 
memory_manager.go:354] "RemoveStaleState removing state" podUID="14d58cb6-7221-4dc8-8ffa-552aa6dff1ac" containerName="registry-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157907 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="swift-recon-cron" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157917 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157947 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f229152-a987-497e-8777-937b4f6880d0" containerName="ovs-vswitchd" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157960 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-expirer" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157975 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.157989 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-replicator" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158002 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-server" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158012 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-api" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158021 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="rsync" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158029 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="account-reaper" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158042 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158052 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="container-auditor" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158059 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="507328e4-20c4-4e84-b781-e4889419607e" containerName="neutron-httpd" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158067 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="835da3fd-0497-4072-9d76-122d19300787" containerName="object-updater" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.158831 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.161704 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.161734 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.179608 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx"] Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.273015 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.273142 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.273178 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k95g\" (UniqueName: \"kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.375101 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.375550 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.376313 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k95g\" (UniqueName: \"kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.376774 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume\") pod 
\"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.393629 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.402670 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k95g\" (UniqueName: \"kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g\") pod \"collect-profiles-29483460-mwdvx\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.483603 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.946906 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx"] Jan 21 15:00:00 crc kubenswrapper[4834]: I0121 15:00:00.996567 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" event={"ID":"d059b54f-a3da-493d-84fb-9dd98acbe092","Type":"ContainerStarted","Data":"3cbc5119f2b341b6c6fa1e4bcf80dd893c7126768ec4e1f6148bb596688fb4a7"} Jan 21 15:00:02 crc kubenswrapper[4834]: I0121 15:00:02.006787 4834 generic.go:334] "Generic (PLEG): container finished" podID="d059b54f-a3da-493d-84fb-9dd98acbe092" containerID="6f6da3293e2874a884f7df825aecd3f3bf71a0d820a28db01169c516b2077b94" exitCode=0 Jan 21 15:00:02 crc kubenswrapper[4834]: I0121 15:00:02.006859 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" event={"ID":"d059b54f-a3da-493d-84fb-9dd98acbe092","Type":"ContainerDied","Data":"6f6da3293e2874a884f7df825aecd3f3bf71a0d820a28db01169c516b2077b94"} Jan 21 15:00:02 crc kubenswrapper[4834]: I0121 15:00:02.325944 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:00:02 crc kubenswrapper[4834]: E0121 15:00:02.326384 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.290283 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.322819 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume\") pod \"d059b54f-a3da-493d-84fb-9dd98acbe092\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.322913 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume\") pod \"d059b54f-a3da-493d-84fb-9dd98acbe092\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.323037 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k95g\" (UniqueName: \"kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g\") pod \"d059b54f-a3da-493d-84fb-9dd98acbe092\" (UID: \"d059b54f-a3da-493d-84fb-9dd98acbe092\") " Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.324486 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume" (OuterVolumeSpecName: "config-volume") pod "d059b54f-a3da-493d-84fb-9dd98acbe092" (UID: "d059b54f-a3da-493d-84fb-9dd98acbe092"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.329300 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d059b54f-a3da-493d-84fb-9dd98acbe092" (UID: "d059b54f-a3da-493d-84fb-9dd98acbe092"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.330255 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g" (OuterVolumeSpecName: "kube-api-access-5k95g") pod "d059b54f-a3da-493d-84fb-9dd98acbe092" (UID: "d059b54f-a3da-493d-84fb-9dd98acbe092"). InnerVolumeSpecName "kube-api-access-5k95g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.425384 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059b54f-a3da-493d-84fb-9dd98acbe092-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.425501 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d059b54f-a3da-493d-84fb-9dd98acbe092-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:03 crc kubenswrapper[4834]: I0121 15:00:03.425535 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k95g\" (UniqueName: \"kubernetes.io/projected/d059b54f-a3da-493d-84fb-9dd98acbe092-kube-api-access-5k95g\") on node \"crc\" DevicePath \"\"" Jan 21 15:00:04 crc kubenswrapper[4834]: I0121 15:00:04.024270 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" event={"ID":"d059b54f-a3da-493d-84fb-9dd98acbe092","Type":"ContainerDied","Data":"3cbc5119f2b341b6c6fa1e4bcf80dd893c7126768ec4e1f6148bb596688fb4a7"} Jan 21 15:00:04 crc kubenswrapper[4834]: I0121 15:00:04.024323 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbc5119f2b341b6c6fa1e4bcf80dd893c7126768ec4e1f6148bb596688fb4a7" Jan 21 15:00:04 crc kubenswrapper[4834]: I0121 15:00:04.024345 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.045588 4834 scope.go:117] "RemoveContainer" containerID="812c3bd340e74d50c9d33e4f333c1e389145b76516d8d553ad34bad8cd6449a8" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.064780 4834 scope.go:117] "RemoveContainer" containerID="db044be1aefe255c10ce8baeb7d8226a8e00d8f2d732191c49cc1dce3a593cd6" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.086253 4834 scope.go:117] "RemoveContainer" containerID="4a6ed2df30e010201b9a36b01247708744061f14a48a1665bb13fdeb43d9d9bb" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.104231 4834 scope.go:117] "RemoveContainer" containerID="f79820e45fbe2f14701ead04fcf982473e4bd920f4dc695e1566caa2639d510f" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.155607 4834 scope.go:117] "RemoveContainer" containerID="b47ed0f6c8104a3b7953464e5976417f153bf0a91bf10723168f9b3f52ee2cdb" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.177503 4834 scope.go:117] "RemoveContainer" containerID="e2e5f7a516e7bf1e00be01f5c942a902e3d70b5381d1e5c1009574470b8a35af" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.208593 4834 scope.go:117] "RemoveContainer" containerID="bd82e25130556e19b66daf7d96d6d14fa88862acddfe74bca857b1a15c8bb86c" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.227750 4834 scope.go:117] "RemoveContainer" containerID="1da4708d65368042feab6ed33ff9ce319ef9460c7feb86ef2980a44d86a2a1dc" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.254111 4834 scope.go:117] "RemoveContainer" containerID="83df69fcbb26d2aebb7daf416be409e283ad79ee46ccc601f9324e32b0922177" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.277174 4834 scope.go:117] "RemoveContainer" containerID="2f3932789a99d5835acfc99b1868b189cbc0e881b542fe12d0bd0edb9430be56" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.296690 4834 scope.go:117] "RemoveContainer" 
containerID="022b4dcefb10e6849119a581d8764d474f56076acec2b622fe0ded8cd0b6117c" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.317909 4834 scope.go:117] "RemoveContainer" containerID="acf275885a87766e22e50a5e1ce91edb61ca986bfae256e0a29a5ab7684a2f57" Jan 21 15:00:05 crc kubenswrapper[4834]: I0121 15:00:05.337441 4834 scope.go:117] "RemoveContainer" containerID="3091fd5a5129a158db5ebfe8939671c846d9d2c5198ff4166ff9ed88062b7d32" Jan 21 15:00:17 crc kubenswrapper[4834]: I0121 15:00:17.324636 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:00:17 crc kubenswrapper[4834]: E0121 15:00:17.325357 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:00:31 crc kubenswrapper[4834]: I0121 15:00:31.324426 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:00:31 crc kubenswrapper[4834]: E0121 15:00:31.325073 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:00:44 crc kubenswrapper[4834]: I0121 15:00:44.329088 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:00:44 crc kubenswrapper[4834]: E0121 15:00:44.329984 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:00:59 crc kubenswrapper[4834]: I0121 15:00:59.325289 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:00:59 crc kubenswrapper[4834]: E0121 15:00:59.326574 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:01:05 crc kubenswrapper[4834]: I0121 15:01:05.480227 4834 scope.go:117] "RemoveContainer" containerID="47a76e82bb6ec47433c8cfb185300918cf777eb2d63a773f8288426f63642a13" Jan 21 15:01:05 crc kubenswrapper[4834]: I0121 15:01:05.541531 4834 scope.go:117] "RemoveContainer" containerID="d4fa94c6733544ea0cf3bc614fc78eb3be964784628806f3011d3ecf53e4fcf5" Jan 21 15:01:10 crc kubenswrapper[4834]: I0121 15:01:10.325203 4834 
scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:01:10 crc kubenswrapper[4834]: E0121 15:01:10.325959 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:01:25 crc kubenswrapper[4834]: I0121 15:01:25.325108 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:01:26 crc kubenswrapper[4834]: I0121 15:01:26.664542 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662"} Jan 21 15:02:05 crc kubenswrapper[4834]: I0121 15:02:05.618276 4834 scope.go:117] "RemoveContainer" containerID="ca88e7b806a1891e2d431e5d68c2b01022bb4519e1cee82c14b740ffd923edfc" Jan 21 15:03:47 crc kubenswrapper[4834]: I0121 15:03:47.114446 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:03:47 crc kubenswrapper[4834]: I0121 15:03:47.115447 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.113945 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.114903 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.255984 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:17 crc kubenswrapper[4834]: E0121 15:04:17.256404 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d059b54f-a3da-493d-84fb-9dd98acbe092" containerName="collect-profiles" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.256422 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d059b54f-a3da-493d-84fb-9dd98acbe092" containerName="collect-profiles" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.256572 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d059b54f-a3da-493d-84fb-9dd98acbe092" 
containerName="collect-profiles" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.257696 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.276684 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.418571 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnnz2\" (UniqueName: \"kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.418680 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.418712 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.520557 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnnz2\" (UniqueName: \"kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.520653 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.520684 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.521455 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.521568 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 
21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.552334 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnnz2\" (UniqueName: \"kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2\") pod \"redhat-operators-5qlfm\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.598021 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:17 crc kubenswrapper[4834]: I0121 15:04:17.891560 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:18 crc kubenswrapper[4834]: I0121 15:04:18.268077 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerStarted","Data":"2b254acf6ef859f92312cf3b24cba916c917ce6a77f253d87366f38d36bf6924"} Jan 21 15:04:18 crc kubenswrapper[4834]: I0121 15:04:18.268500 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerStarted","Data":"71d5453939298b0dd5dfee8a125431d58ad9e3665188da0f40b8a46f0600162c"} Jan 21 15:04:18 crc kubenswrapper[4834]: I0121 15:04:18.271059 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:04:19 crc kubenswrapper[4834]: I0121 15:04:19.278142 4834 generic.go:334] "Generic (PLEG): container finished" podID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerID="2b254acf6ef859f92312cf3b24cba916c917ce6a77f253d87366f38d36bf6924" exitCode=0 Jan 21 15:04:19 crc kubenswrapper[4834]: I0121 15:04:19.278255 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerDied","Data":"2b254acf6ef859f92312cf3b24cba916c917ce6a77f253d87366f38d36bf6924"} Jan 21 15:04:20 crc kubenswrapper[4834]: I0121 15:04:20.287990 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerStarted","Data":"e9e396576bccb6e4f281c371c913b1dd159f73a0698338de4ae64f16aff454ab"} Jan 21 15:04:21 crc kubenswrapper[4834]: I0121 15:04:21.299344 4834 generic.go:334] "Generic (PLEG): container finished" podID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerID="e9e396576bccb6e4f281c371c913b1dd159f73a0698338de4ae64f16aff454ab" exitCode=0 Jan 21 15:04:21 crc kubenswrapper[4834]: I0121 15:04:21.299420 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerDied","Data":"e9e396576bccb6e4f281c371c913b1dd159f73a0698338de4ae64f16aff454ab"} Jan 21 15:04:22 crc kubenswrapper[4834]: I0121 15:04:22.309676 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerStarted","Data":"27d7a475dfeb3e27fc926c1d3fbd08592482f2b8b20e4fccbaba627db2c65a20"} Jan 21 15:04:22 crc kubenswrapper[4834]: I0121 15:04:22.344452 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qlfm" 
podStartSLOduration=1.947910778 podStartE2EDuration="5.344412846s" podCreationTimestamp="2026-01-21 15:04:17 +0000 UTC" firstStartedPulling="2026-01-21 15:04:18.270672019 +0000 UTC m=+2004.245021064" lastFinishedPulling="2026-01-21 15:04:21.667174087 +0000 UTC m=+2007.641523132" observedRunningTime="2026-01-21 15:04:22.336045347 +0000 UTC m=+2008.310394402" watchObservedRunningTime="2026-01-21 15:04:22.344412846 +0000 UTC m=+2008.318761891" Jan 21 15:04:27 crc kubenswrapper[4834]: I0121 15:04:27.598951 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:27 crc kubenswrapper[4834]: I0121 15:04:27.600192 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:28 crc kubenswrapper[4834]: I0121 15:04:28.646067 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qlfm" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="registry-server" probeResult="failure" output=< Jan 21 15:04:28 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 15:04:28 crc kubenswrapper[4834]: > Jan 21 15:04:37 crc kubenswrapper[4834]: I0121 15:04:37.642250 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:37 crc kubenswrapper[4834]: I0121 15:04:37.695076 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:37 crc kubenswrapper[4834]: I0121 15:04:37.883512 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:39 crc kubenswrapper[4834]: I0121 15:04:39.473387 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qlfm" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="registry-server" containerID="cri-o://27d7a475dfeb3e27fc926c1d3fbd08592482f2b8b20e4fccbaba627db2c65a20" gracePeriod=2 Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.485222 4834 generic.go:334] "Generic (PLEG): container finished" podID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerID="27d7a475dfeb3e27fc926c1d3fbd08592482f2b8b20e4fccbaba627db2c65a20" exitCode=0 Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.485302 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerDied","Data":"27d7a475dfeb3e27fc926c1d3fbd08592482f2b8b20e4fccbaba627db2c65a20"} Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.569154 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.635386 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities\") pod \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.636144 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnnz2\" (UniqueName: \"kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2\") pod \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.636236 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content\") pod \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\" (UID: \"5611ca34-3cb1-4797-bd2a-cc19d27db6b8\") " Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.636873 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities" (OuterVolumeSpecName: "utilities") pod "5611ca34-3cb1-4797-bd2a-cc19d27db6b8" (UID: "5611ca34-3cb1-4797-bd2a-cc19d27db6b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.643735 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2" (OuterVolumeSpecName: "kube-api-access-qnnz2") pod "5611ca34-3cb1-4797-bd2a-cc19d27db6b8" (UID: "5611ca34-3cb1-4797-bd2a-cc19d27db6b8"). InnerVolumeSpecName "kube-api-access-qnnz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.738540 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnnz2\" (UniqueName: \"kubernetes.io/projected/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-kube-api-access-qnnz2\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.738596 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.786444 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5611ca34-3cb1-4797-bd2a-cc19d27db6b8" (UID: "5611ca34-3cb1-4797-bd2a-cc19d27db6b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:04:40 crc kubenswrapper[4834]: I0121 15:04:40.839955 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5611ca34-3cb1-4797-bd2a-cc19d27db6b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.495705 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qlfm" event={"ID":"5611ca34-3cb1-4797-bd2a-cc19d27db6b8","Type":"ContainerDied","Data":"71d5453939298b0dd5dfee8a125431d58ad9e3665188da0f40b8a46f0600162c"} Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.495774 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qlfm" Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.495787 4834 scope.go:117] "RemoveContainer" containerID="27d7a475dfeb3e27fc926c1d3fbd08592482f2b8b20e4fccbaba627db2c65a20" Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.518397 4834 scope.go:117] "RemoveContainer" containerID="e9e396576bccb6e4f281c371c913b1dd159f73a0698338de4ae64f16aff454ab" Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.541845 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.549165 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qlfm"] Jan 21 15:04:41 crc kubenswrapper[4834]: I0121 15:04:41.557419 4834 scope.go:117] "RemoveContainer" containerID="2b254acf6ef859f92312cf3b24cba916c917ce6a77f253d87366f38d36bf6924" Jan 21 15:04:42 crc kubenswrapper[4834]: I0121 15:04:42.336399 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" path="/var/lib/kubelet/pods/5611ca34-3cb1-4797-bd2a-cc19d27db6b8/volumes" Jan 21 15:04:47 crc kubenswrapper[4834]: I0121 15:04:47.114162 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:04:47 crc kubenswrapper[4834]: I0121 15:04:47.114668 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:04:47 crc kubenswrapper[4834]: I0121 15:04:47.114740 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 15:04:47 crc kubenswrapper[4834]: I0121 15:04:47.115679 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:04:47 crc kubenswrapper[4834]: I0121 15:04:47.115756 4834 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662" gracePeriod=600 Jan 21 15:04:48 crc kubenswrapper[4834]: I0121 15:04:48.562003 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662" exitCode=0 Jan 21 15:04:48 crc kubenswrapper[4834]: I0121 15:04:48.562068 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662"} Jan 21 15:04:48 crc kubenswrapper[4834]: I0121 15:04:48.562449 4834 scope.go:117] "RemoveContainer" containerID="78d5e8619b2874e48cc46c73db85d992020ef74e9902b0a37e72f9a04580f551" Jan 21 15:04:49 crc kubenswrapper[4834]: I0121 15:04:49.570903 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"} Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.878498 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:12 crc kubenswrapper[4834]: E0121 15:07:12.879518 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="extract-content" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.879534 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="extract-content" Jan 21 15:07:12 crc kubenswrapper[4834]: E0121 15:07:12.879577 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="extract-utilities" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.879585 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="extract-utilities" Jan 21 15:07:12 crc kubenswrapper[4834]: E0121 15:07:12.879599 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="registry-server" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.879607 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="registry-server" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.879735 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5611ca34-3cb1-4797-bd2a-cc19d27db6b8" containerName="registry-server" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.881188 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:12 crc kubenswrapper[4834]: I0121 15:07:12.893961 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.055186 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4zx\" (UniqueName: \"kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.055405 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.055569 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.078458 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.080503 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.098867 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.159075 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.159180 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4zx\" (UniqueName: \"kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.159228 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.159672 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " 
pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.161471 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.181653 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4zx\" (UniqueName: \"kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx\") pod \"redhat-marketplace-wpjjb\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.203457 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.260838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5m2\" (UniqueName: \"kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.260946 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.261214 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.362807 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.363167 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.363205 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5m2\" (UniqueName: \"kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.363672 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.363780 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.386272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr5m2\" (UniqueName: \"kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2\") pod \"community-operators-r746p\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") " pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:13 crc kubenswrapper[4834]: I0121 15:07:13.399149 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.003869 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.016509 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.505836 4834 generic.go:334] "Generic (PLEG): container finished" podID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerID="870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba" exitCode=0 Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.506114 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerDied","Data":"870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba"} Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.506165 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerStarted","Data":"6d0cac4704fe9eca615b163e9ac14aea25ef0341519a0a3caf708e5e0b02e956"} Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.510707 4834 generic.go:334] "Generic (PLEG): container finished" podID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerID="2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d" exitCode=0 Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.510754 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerDied","Data":"2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d"} Jan 21 15:07:14 crc kubenswrapper[4834]: I0121 15:07:14.510781 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerStarted","Data":"16eda2ef02ac9cd40b3acbe52e163ab249cc6880413fb87be94b771a5d166d4c"} Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.481207 4834 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.483527 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.497067 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.635670 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.636234 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7p7\" (UniqueName: \"kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.636324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.737561 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.737682 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.737729 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7p7\" (UniqueName: \"kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.738182 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.738361 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities\") pod \"certified-operators-v5wr7\" (UID: 
\"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.762798 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7p7\" (UniqueName: \"kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7\") pod \"certified-operators-v5wr7\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:15 crc kubenswrapper[4834]: I0121 15:07:15.815705 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.348698 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.530636 4834 generic.go:334] "Generic (PLEG): container finished" podID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerID="000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba" exitCode=0 Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.530719 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerDied","Data":"000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba"} Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.533353 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerStarted","Data":"f5c745ddbe13fd7983e943e1e01bba92e9f55d6aca604e82b6d01355fc7ee618"} Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.533405 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerStarted","Data":"af2304090a196070751ce180681c82fe7e9aeb885746e03f45c680d6dc7a4e21"} Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.540981 4834 generic.go:334] "Generic (PLEG): container finished" podID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerID="d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a" exitCode=0 Jan 21 15:07:16 crc kubenswrapper[4834]: I0121 15:07:16.541101 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerDied","Data":"d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a"} Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.115098 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.115316 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.553555 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerStarted","Data":"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46"} Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.557438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerStarted","Data":"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d"} Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.559386 4834 generic.go:334] "Generic (PLEG): container finished" podID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerID="f5c745ddbe13fd7983e943e1e01bba92e9f55d6aca604e82b6d01355fc7ee618" exitCode=0 Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.559451 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerDied","Data":"f5c745ddbe13fd7983e943e1e01bba92e9f55d6aca604e82b6d01355fc7ee618"} Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.583791 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wpjjb" podStartSLOduration=3.141482984 podStartE2EDuration="5.583765586s" podCreationTimestamp="2026-01-21 15:07:12 +0000 UTC" firstStartedPulling="2026-01-21 15:07:14.512263766 +0000 UTC m=+2180.486612811" lastFinishedPulling="2026-01-21 15:07:16.954546358 +0000 UTC m=+2182.928895413" observedRunningTime="2026-01-21 15:07:17.574098026 +0000 UTC m=+2183.548447091" watchObservedRunningTime="2026-01-21 15:07:17.583765586 +0000 UTC m=+2183.558114621" Jan 21 15:07:17 crc kubenswrapper[4834]: I0121 15:07:17.618842 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r746p" podStartSLOduration=2.085349182 podStartE2EDuration="4.618811764s" podCreationTimestamp="2026-01-21 15:07:13 +0000 UTC" firstStartedPulling="2026-01-21 15:07:14.508371885 +0000 UTC m=+2180.482720930" lastFinishedPulling="2026-01-21 15:07:17.041834467 +0000 UTC m=+2183.016183512" observedRunningTime="2026-01-21 15:07:17.612886121 +0000 UTC m=+2183.587235176" watchObservedRunningTime="2026-01-21 15:07:17.618811764 +0000 UTC m=+2183.593160819" Jan 21 15:07:22 crc kubenswrapper[4834]: I0121 15:07:22.608743 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerStarted","Data":"93c2b8dd965c003dccb3176561bb12b96ba3702e33b1cf6ddd98e41cb68fea33"} Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.204234 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.204314 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.257388 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.400304 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.400380 4834 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.443605 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.620875 4834 generic.go:334] "Generic (PLEG): container finished" podID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerID="93c2b8dd965c003dccb3176561bb12b96ba3702e33b1cf6ddd98e41cb68fea33" exitCode=0 Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.621120 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerDied","Data":"93c2b8dd965c003dccb3176561bb12b96ba3702e33b1cf6ddd98e41cb68fea33"} Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.668180 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:23 crc kubenswrapper[4834]: I0121 15:07:23.669970 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:24 crc kubenswrapper[4834]: I0121 15:07:24.631534 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerStarted","Data":"9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4"} Jan 21 15:07:24 crc kubenswrapper[4834]: I0121 15:07:24.655943 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5wr7" podStartSLOduration=2.821987841 podStartE2EDuration="9.655889538s" podCreationTimestamp="2026-01-21 15:07:15 +0000 UTC" firstStartedPulling="2026-01-21 15:07:17.561536917 +0000 UTC m=+2183.535885972" lastFinishedPulling="2026-01-21 15:07:24.395438604 +0000 UTC m=+2190.369787669" observedRunningTime="2026-01-21 15:07:24.650546962 +0000 UTC m=+2190.624896017" watchObservedRunningTime="2026-01-21 15:07:24.655889538 +0000 UTC m=+2190.630238583" Jan 21 15:07:25 crc kubenswrapper[4834]: I0121 15:07:25.671326 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:25 crc kubenswrapper[4834]: I0121 15:07:25.671691 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wpjjb" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="registry-server" containerID="cri-o://35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46" gracePeriod=2 Jan 21 15:07:25 crc kubenswrapper[4834]: I0121 15:07:25.816005 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:25 crc kubenswrapper[4834]: I0121 15:07:25.816596 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.605850 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.648786 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k4zx\" (UniqueName: \"kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx\") pod \"8659b66b-c5ab-45eb-bc1b-be1da812a115\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.648840 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities\") pod \"8659b66b-c5ab-45eb-bc1b-be1da812a115\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.648914 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content\") pod \"8659b66b-c5ab-45eb-bc1b-be1da812a115\" (UID: \"8659b66b-c5ab-45eb-bc1b-be1da812a115\") " Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.650224 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities" (OuterVolumeSpecName: "utilities") pod "8659b66b-c5ab-45eb-bc1b-be1da812a115" (UID: "8659b66b-c5ab-45eb-bc1b-be1da812a115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.652161 4834 generic.go:334] "Generic (PLEG): container finished" podID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerID="35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46" exitCode=0 Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.653161 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wpjjb" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.653536 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerDied","Data":"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46"} Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.653608 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wpjjb" event={"ID":"8659b66b-c5ab-45eb-bc1b-be1da812a115","Type":"ContainerDied","Data":"16eda2ef02ac9cd40b3acbe52e163ab249cc6880413fb87be94b771a5d166d4c"} Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.653702 4834 scope.go:117] "RemoveContainer" containerID="35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.657859 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx" (OuterVolumeSpecName: "kube-api-access-2k4zx") pod "8659b66b-c5ab-45eb-bc1b-be1da812a115" (UID: "8659b66b-c5ab-45eb-bc1b-be1da812a115"). InnerVolumeSpecName "kube-api-access-2k4zx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.685295 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8659b66b-c5ab-45eb-bc1b-be1da812a115" (UID: "8659b66b-c5ab-45eb-bc1b-be1da812a115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.695401 4834 scope.go:117] "RemoveContainer" containerID="d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.718269 4834 scope.go:117] "RemoveContainer" containerID="2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.750642 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.750697 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k4zx\" (UniqueName: \"kubernetes.io/projected/8659b66b-c5ab-45eb-bc1b-be1da812a115-kube-api-access-2k4zx\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.750710 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8659b66b-c5ab-45eb-bc1b-be1da812a115-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.754307 4834 scope.go:117] "RemoveContainer" containerID="35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46" Jan 21 15:07:26 crc kubenswrapper[4834]: E0121 15:07:26.754893 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46\": container with ID starting with 35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46 not found: ID does not exist" containerID="35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.754964 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46"} err="failed to get container status \"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46\": rpc error: code = NotFound desc = could not find container \"35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46\": container with ID starting with 35996be1a475882ec46dc147229a172ed40163a30f100066c1073abb86d72b46 not found: ID does not exist" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.754995 4834 scope.go:117] "RemoveContainer" containerID="d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a" Jan 21 15:07:26 crc kubenswrapper[4834]: E0121 15:07:26.755587 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a\": container with ID starting with d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a not found: ID does not exist" containerID="d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a" Jan 
21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.755642 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a"} err="failed to get container status \"d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a\": rpc error: code = NotFound desc = could not find container \"d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a\": container with ID starting with d833ed94683525dbd9bc0ff0d6b2c70e9f71d1a9e49d22b379466bd9efdbe85a not found: ID does not exist" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.755692 4834 scope.go:117] "RemoveContainer" containerID="2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d" Jan 21 15:07:26 crc kubenswrapper[4834]: E0121 15:07:26.756537 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d\": container with ID starting with 2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d not found: ID does not exist" containerID="2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.756569 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d"} err="failed to get container status \"2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d\": rpc error: code = NotFound desc = could not find container \"2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d\": container with ID starting with 2853f221de53940c005fdf4df3bcbc0d7f1ac620054604161115335191b9715d not found: ID does not exist" Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.868640 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v5wr7" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" probeResult="failure" output=< Jan 21 15:07:26 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 15:07:26 crc kubenswrapper[4834]: > Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.989947 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:26 crc kubenswrapper[4834]: I0121 15:07:26.996597 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wpjjb"] Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.070868 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.071275 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r746p" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="registry-server" containerID="cri-o://babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d" gracePeriod=2 Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.544527 4834 util.go:48] "No ready sandbox for pod can be found. 
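The three-line Startup probe output above ("timeout: failed to connect service \":50051\" within 1s") is the characteristic failure message of grpc_health_probe, which the ExecSync entries later in this log show being run as ["grpc_health_probe","-addr=:50051"] inside the registry-server containers. A minimal sketch of the equivalent check using the standard gRPC health-checking protocol, assuming the usual grpc-go packages; this illustrates the probe, it is not the probe's actual source:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// checkOnce dials the gRPC endpoint and issues one health-check RPC,
// which is essentially what `grpc_health_probe -addr=:50051` does.
func checkOnce(addr string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	conn, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // fail within the timeout if nothing is listening yet
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s: %w", addr, timeout, err)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		return err
	}
	if resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		return fmt.Errorf("service not serving: %s", resp.GetStatus())
	}
	return nil
}

func main() {
	// Address and 1s window taken from the probe output in the log.
	if err := checkOnce("localhost:50051", time.Second); err != nil {
		fmt.Println(err)
	}
}
```

Failing to connect within the window is expected while the registry-server is still extracting catalog content, which is why the kubelet reports the startup probe as "unhealthy" before flipping it to "started".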
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.562824 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr5m2\" (UniqueName: \"kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2\") pod \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") "
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.562959 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities\") pod \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") "
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.563000 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content\") pod \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\" (UID: \"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9\") "
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.564159 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities" (OuterVolumeSpecName: "utilities") pod "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" (UID: "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.577236 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2" (OuterVolumeSpecName: "kube-api-access-jr5m2") pod "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" (UID: "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9"). InnerVolumeSpecName "kube-api-access-jr5m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.631265 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" (UID: "cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664223 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr5m2\" (UniqueName: \"kubernetes.io/projected/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-kube-api-access-jr5m2\") on node \"crc\" DevicePath \"\""
Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664258 4834 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-r746p" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664199 4834 generic.go:334] "Generic (PLEG): container finished" podID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerID="babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d" exitCode=0 Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664268 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664389 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664259 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerDied","Data":"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d"} Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664545 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r746p" event={"ID":"cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9","Type":"ContainerDied","Data":"6d0cac4704fe9eca615b163e9ac14aea25ef0341519a0a3caf708e5e0b02e956"} Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.664591 4834 scope.go:117] "RemoveContainer" containerID="babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.699494 4834 scope.go:117] "RemoveContainer" containerID="000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.700282 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.705658 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r746p"] Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.718858 4834 scope.go:117] "RemoveContainer" containerID="870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.735241 4834 scope.go:117] "RemoveContainer" containerID="babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d" Jan 21 15:07:27 crc kubenswrapper[4834]: E0121 15:07:27.735901 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d\": container with ID starting with babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d not found: ID does not exist" containerID="babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.735989 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d"} err="failed to get container status \"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d\": rpc error: code = NotFound desc = could not find container \"babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d\": container with ID starting with 
babb0cba21c0b0f67f3d3fb582422518536812f5882aabe78f25713603837c2d not found: ID does not exist" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.736029 4834 scope.go:117] "RemoveContainer" containerID="000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba" Jan 21 15:07:27 crc kubenswrapper[4834]: E0121 15:07:27.736550 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba\": container with ID starting with 000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba not found: ID does not exist" containerID="000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.736592 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba"} err="failed to get container status \"000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba\": rpc error: code = NotFound desc = could not find container \"000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba\": container with ID starting with 000a22bfb114d9a347a8e3140468a4e8a704d7dc9f95074f6a86dce19b5838ba not found: ID does not exist" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.736625 4834 scope.go:117] "RemoveContainer" containerID="870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba" Jan 21 15:07:27 crc kubenswrapper[4834]: E0121 15:07:27.737129 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba\": container with ID starting with 870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba not found: ID does not exist" containerID="870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba" Jan 21 15:07:27 crc kubenswrapper[4834]: I0121 15:07:27.737166 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba"} err="failed to get container status \"870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba\": rpc error: code = NotFound desc = could not find container \"870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba\": container with ID starting with 870c4e9180bf330acc6c3100f9ccfedee16a7180655804d03c6609f7028ad1ba not found: ID does not exist" Jan 21 15:07:28 crc kubenswrapper[4834]: I0121 15:07:28.337806 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" path="/var/lib/kubelet/pods/8659b66b-c5ab-45eb-bc1b-be1da812a115/volumes" Jan 21 15:07:28 crc kubenswrapper[4834]: I0121 15:07:28.339288 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" path="/var/lib/kubelet/pods/cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9/volumes" Jan 21 15:07:35 crc kubenswrapper[4834]: I0121 15:07:35.867751 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:35 crc kubenswrapper[4834]: I0121 15:07:35.920591 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:07:35 crc kubenswrapper[4834]: I0121 15:07:35.999184 4834 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:07:36 crc kubenswrapper[4834]: I0121 15:07:36.108628 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"] Jan 21 15:07:36 crc kubenswrapper[4834]: I0121 15:07:36.109320 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2qf4g" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="registry-server" containerID="cri-o://4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" gracePeriod=2 Jan 21 15:07:36 crc kubenswrapper[4834]: E0121 15:07:36.820317 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76 is running failed: container process not found" containerID="4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:07:36 crc kubenswrapper[4834]: E0121 15:07:36.821057 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76 is running failed: container process not found" containerID="4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:07:36 crc kubenswrapper[4834]: E0121 15:07:36.821432 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76 is running failed: container process not found" containerID="4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:07:36 crc kubenswrapper[4834]: E0121 15:07:36.821554 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-2qf4g" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="registry-server" Jan 21 15:07:37 crc kubenswrapper[4834]: I0121 15:07:37.749612 4834 generic.go:334] "Generic (PLEG): container finished" podID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerID="4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" exitCode=0 Jan 21 15:07:37 crc kubenswrapper[4834]: I0121 15:07:37.749702 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerDied","Data":"4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76"} Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.399405 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2qf4g" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.469187 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbrl5\" (UniqueName: \"kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5\") pod \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.469330 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content\") pod \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.469398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities\") pod \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\" (UID: \"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e\") " Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.470258 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities" (OuterVolumeSpecName: "utilities") pod "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" (UID: "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.476121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5" (OuterVolumeSpecName: "kube-api-access-mbrl5") pod "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" (UID: "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e"). InnerVolumeSpecName "kube-api-access-mbrl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.519423 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" (UID: "25a6ac4b-5bd7-468a-888d-0b7fcc3d290e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.575639 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.575685 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.575699 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbrl5\" (UniqueName: \"kubernetes.io/projected/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e-kube-api-access-mbrl5\") on node \"crc\" DevicePath \"\"" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.765649 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2qf4g" event={"ID":"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e","Type":"ContainerDied","Data":"2e413d42bf396d4a7d00f02209427f0ed2f91d82ac22126c4940b58559da1980"} Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.765737 4834 scope.go:117] "RemoveContainer" containerID="4e386659416c33e303f861c476acff70c9e9455b7e102d06ff3b8b7fc0728d76" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.765753 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2qf4g" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.808573 4834 scope.go:117] "RemoveContainer" containerID="b5d2234e1282b9ce49d43a134d7b29d570b777c9cb0823b879a5b8374592c12d" Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.812182 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"] Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.818809 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2qf4g"] Jan 21 15:07:38 crc kubenswrapper[4834]: I0121 15:07:38.840214 4834 scope.go:117] "RemoveContainer" containerID="baba8b3471447007de786be507a8db435ca790e3533a61bfaacfc23219fcfe7d" Jan 21 15:07:40 crc kubenswrapper[4834]: I0121 15:07:40.333461 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" path="/var/lib/kubelet/pods/25a6ac4b-5bd7-468a-888d-0b7fcc3d290e/volumes" Jan 21 15:07:47 crc kubenswrapper[4834]: I0121 15:07:47.114390 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:07:47 crc kubenswrapper[4834]: I0121 15:07:47.115346 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:08:17 crc kubenswrapper[4834]: I0121 15:08:17.114319 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:08:17 crc kubenswrapper[4834]: I0121 15:08:17.115023 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:08:17 crc kubenswrapper[4834]: I0121 15:08:17.115086 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:08:17 crc kubenswrapper[4834]: I0121 15:08:17.115811 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:08:17 crc kubenswrapper[4834]: I0121 15:08:17.115870 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" gracePeriod=600
Jan 21 15:08:17 crc kubenswrapper[4834]: E0121 15:08:17.745059 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:08:18 crc kubenswrapper[4834]: I0121 15:08:18.079295 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" exitCode=0
Jan 21 15:08:18 crc kubenswrapper[4834]: I0121 15:08:18.079366 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"}
Jan 21 15:08:18 crc kubenswrapper[4834]: I0121 15:08:18.079403 4834 scope.go:117] "RemoveContainer" containerID="7a85ae5372e101ee6fa1c58cc15515e6947ebd2293de699276a1c4ae24441662"
Jan 21 15:08:18 crc kubenswrapper[4834]: I0121 15:08:18.080104 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"
Jan 21 15:08:18 crc kubenswrapper[4834]: E0121 15:08:18.080436 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:08:29 crc kubenswrapper[4834]: I0121 15:08:29.325232 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"
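From this point the kubelet no longer restarts the container immediately: each periodic sync attempt is rejected with CrashLoopBackOff until the current back-off delay expires, which is why only "RemoveContainer" / "Error syncing pod, skipping" pairs repeat below rather than fresh starts. The "5m0s" in the error is the delay cap; the kubelet's restart back-off is commonly described as starting at 10s and doubling up to that cap, resetting after the container runs cleanly for a while. Only the 5m cap appears in the log; the 10s base is an assumption. A sketch of that schedule:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelays returns successive restart delays under exponential
// back-off with the given base and cap. The 10s base is an assumed
// default; the 5m cap appears verbatim in the log ("back-off 5m0s").
func crashLoopDelays(base, max time.Duration, n int) []time.Duration {
	delays := make([]time.Duration, 0, n)
	d := base
	for i := 0; i < n; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

func main() {
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	fmt.Println(crashLoopDelays(10*time.Second, 5*time.Minute, 7))
}
```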
containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:08:29 crc kubenswrapper[4834]: E0121 15:08:29.326156 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:08:40 crc kubenswrapper[4834]: I0121 15:08:40.325599 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:08:40 crc kubenswrapper[4834]: E0121 15:08:40.326891 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:08:55 crc kubenswrapper[4834]: I0121 15:08:55.324679 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:08:55 crc kubenswrapper[4834]: E0121 15:08:55.326415 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:09:07 crc kubenswrapper[4834]: I0121 15:09:07.324525 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:09:07 crc kubenswrapper[4834]: E0121 15:09:07.325267 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:09:19 crc kubenswrapper[4834]: I0121 15:09:19.324358 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:09:19 crc kubenswrapper[4834]: E0121 15:09:19.325228 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:09:33 crc kubenswrapper[4834]: I0121 15:09:33.324489 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:09:33 crc kubenswrapper[4834]: E0121 15:09:33.325481 4834 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:09:44 crc kubenswrapper[4834]: I0121 15:09:44.328763 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:09:44 crc kubenswrapper[4834]: E0121 15:09:44.329869 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:09:58 crc kubenswrapper[4834]: I0121 15:09:58.325861 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:09:58 crc kubenswrapper[4834]: E0121 15:09:58.326733 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:10:13 crc kubenswrapper[4834]: I0121 15:10:13.326372 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:10:13 crc kubenswrapper[4834]: E0121 15:10:13.327818 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:10:25 crc kubenswrapper[4834]: I0121 15:10:25.325584 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:10:25 crc kubenswrapper[4834]: E0121 15:10:25.326576 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:10:36 crc kubenswrapper[4834]: I0121 15:10:36.325732 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:10:36 crc kubenswrapper[4834]: E0121 15:10:36.327177 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:10:50 crc kubenswrapper[4834]: I0121 15:10:50.324906 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:10:50 crc kubenswrapper[4834]: E0121 15:10:50.326184 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:11:02 crc kubenswrapper[4834]: I0121 15:11:02.324414 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:11:02 crc kubenswrapper[4834]: E0121 15:11:02.325680 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:11:16 crc kubenswrapper[4834]: I0121 15:11:16.325074 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:11:16 crc kubenswrapper[4834]: E0121 15:11:16.325843 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:11:31 crc kubenswrapper[4834]: I0121 15:11:31.325586 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:11:31 crc kubenswrapper[4834]: E0121 15:11:31.326763 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:11:46 crc kubenswrapper[4834]: I0121 15:11:46.324406 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:11:46 crc kubenswrapper[4834]: E0121 15:11:46.325253 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:12:01 crc kubenswrapper[4834]: I0121 15:12:01.324596 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:12:01 crc kubenswrapper[4834]: E0121 15:12:01.325910 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:12:14 crc kubenswrapper[4834]: I0121 15:12:14.330607 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:12:14 crc kubenswrapper[4834]: E0121 15:12:14.331560 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:12:27 crc kubenswrapper[4834]: I0121 15:12:27.325107 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:12:27 crc kubenswrapper[4834]: E0121 15:12:27.326128 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:12:39 crc kubenswrapper[4834]: I0121 15:12:39.325080 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:12:39 crc kubenswrapper[4834]: E0121 15:12:39.326260 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:12:50 crc kubenswrapper[4834]: I0121 15:12:50.325842 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:12:50 crc kubenswrapper[4834]: E0121 15:12:50.326723 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:13:02 crc kubenswrapper[4834]: I0121 15:13:02.324303 4834 scope.go:117] "RemoveContainer" 
containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:13:02 crc kubenswrapper[4834]: E0121 15:13:02.325404 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:13:16 crc kubenswrapper[4834]: I0121 15:13:16.324894 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:13:16 crc kubenswrapper[4834]: E0121 15:13:16.325819 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:13:29 crc kubenswrapper[4834]: I0121 15:13:29.325187 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a" Jan 21 15:13:29 crc kubenswrapper[4834]: I0121 15:13:29.837037 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4"} Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.161294 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw"] Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.164135 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.164230 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.164295 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.164349 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.164416 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.164485 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.164552 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.164618 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="extract-utilities" Jan 21 
15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.164696 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.164750 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.165530 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.165633 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.165696 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.165789 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.165851 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.165917 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="extract-content" Jan 21 15:15:00 crc kubenswrapper[4834]: E0121 15:15:00.165994 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.166053 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="extract-utilities" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.166329 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8659b66b-c5ab-45eb-bc1b-be1da812a115" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.166410 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9ff13b-1f85-4d9f-8e0b-d8bddc2131b9" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.166474 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a6ac4b-5bd7-468a-888d-0b7fcc3d290e" containerName="registry-server" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.167232 4834 util.go:30] "No sandbox for pod can be found. 
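The RemoveStaleState / "Deleted CPUSet assignment" pairs above show the CPU and memory managers dropping per-container state recorded for pods that no longer exist, a cleanup triggered when a new pod is admitted. A minimal sketch of that idea under assumed, simplified types (containerRef and the assignment map are illustrative, not kubelet source):

package main

import "fmt"

// containerRef identifies a container by owning pod UID and container name,
// mirroring the podUID/containerName fields in the log entries above.
type containerRef struct {
	podUID        string
	containerName string
}

// removeStaleState drops resource-manager assignments for containers whose
// pod is no longer active on the node.
func removeStaleState(assignments map[containerRef]string, activePods map[string]bool) {
	for ref := range assignments {
		if !activePods[ref.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				ref.podUID, ref.containerName)
			delete(assignments, ref) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[containerRef]string{
		{"25a6ac4b-5bd7-468a-888d-0b7fcc3d290e", "extract-utilities"}: "0-3",
		{"6efbad13-9170-43d9-b945-b022f283ef27", "collect-profiles"}:  "0-3",
	}
	active := map[string]bool{"6efbad13-9170-43d9-b945-b022f283ef27": true}
	removeStaleState(assignments, active)
	fmt.Println("remaining assignments:", len(assignments))
}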
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.172437 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.172730 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.180798 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw"] Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.277186 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.277301 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829k2\" (UniqueName: \"kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.277333 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.378856 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.378955 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829k2\" (UniqueName: \"kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.378986 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.380203 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume\") pod 
\"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.386873 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.398406 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829k2\" (UniqueName: \"kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2\") pod \"collect-profiles-29483475-ln2xw\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.502724 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:00 crc kubenswrapper[4834]: I0121 15:15:00.752015 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw"] Jan 21 15:15:01 crc kubenswrapper[4834]: I0121 15:15:01.707283 4834 generic.go:334] "Generic (PLEG): container finished" podID="6efbad13-9170-43d9-b945-b022f283ef27" containerID="89b80a3d7d218b2b4f3a28be4e22ce842691c5afe1428895e94ea3cfdeafa253" exitCode=0 Jan 21 15:15:01 crc kubenswrapper[4834]: I0121 15:15:01.707360 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" event={"ID":"6efbad13-9170-43d9-b945-b022f283ef27","Type":"ContainerDied","Data":"89b80a3d7d218b2b4f3a28be4e22ce842691c5afe1428895e94ea3cfdeafa253"} Jan 21 15:15:01 crc kubenswrapper[4834]: I0121 15:15:01.707898 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" event={"ID":"6efbad13-9170-43d9-b945-b022f283ef27","Type":"ContainerStarted","Data":"e1439d60090e7953a10b256bc582f98a196da55c6232995872230b4cc1e5cc4a"} Jan 21 15:15:02 crc kubenswrapper[4834]: I0121 15:15:02.982567 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.132508 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume\") pod \"6efbad13-9170-43d9-b945-b022f283ef27\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.132664 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume\") pod \"6efbad13-9170-43d9-b945-b022f283ef27\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.132839 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-829k2\" (UniqueName: \"kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2\") pod \"6efbad13-9170-43d9-b945-b022f283ef27\" (UID: \"6efbad13-9170-43d9-b945-b022f283ef27\") " Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.133286 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume" (OuterVolumeSpecName: "config-volume") pod "6efbad13-9170-43d9-b945-b022f283ef27" (UID: "6efbad13-9170-43d9-b945-b022f283ef27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.138159 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6efbad13-9170-43d9-b945-b022f283ef27" (UID: "6efbad13-9170-43d9-b945-b022f283ef27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.138169 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2" (OuterVolumeSpecName: "kube-api-access-829k2") pod "6efbad13-9170-43d9-b945-b022f283ef27" (UID: "6efbad13-9170-43d9-b945-b022f283ef27"). InnerVolumeSpecName "kube-api-access-829k2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.235307 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6efbad13-9170-43d9-b945-b022f283ef27-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.235388 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6efbad13-9170-43d9-b945-b022f283ef27-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.235402 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-829k2\" (UniqueName: \"kubernetes.io/projected/6efbad13-9170-43d9-b945-b022f283ef27-kube-api-access-829k2\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.724447 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" event={"ID":"6efbad13-9170-43d9-b945-b022f283ef27","Type":"ContainerDied","Data":"e1439d60090e7953a10b256bc582f98a196da55c6232995872230b4cc1e5cc4a"} Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.724498 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1439d60090e7953a10b256bc582f98a196da55c6232995872230b4cc1e5cc4a" Jan 21 15:15:03 crc kubenswrapper[4834]: I0121 15:15:03.724589 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw" Jan 21 15:15:04 crc kubenswrapper[4834]: I0121 15:15:04.065833 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"] Jan 21 15:15:04 crc kubenswrapper[4834]: I0121 15:15:04.071264 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483430-cdmzv"] Jan 21 15:15:04 crc kubenswrapper[4834]: I0121 15:15:04.337817 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ce7851-62b5-4cb5-b7d4-2e03f1606cb0" path="/var/lib/kubelet/pods/98ce7851-62b5-4cb5-b7d4-2e03f1606cb0/volumes" Jan 21 15:15:05 crc kubenswrapper[4834]: I0121 15:15:05.967576 4834 scope.go:117] "RemoveContainer" containerID="06ff54c098b9a4b91dcba9432ccda9554b0ca2ff3172e1fd2c288224300e2693" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.604762 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:16 crc kubenswrapper[4834]: E0121 15:15:16.605854 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efbad13-9170-43d9-b945-b022f283ef27" containerName="collect-profiles" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.605873 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efbad13-9170-43d9-b945-b022f283ef27" containerName="collect-profiles" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.606152 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efbad13-9170-43d9-b945-b022f283ef27" containerName="collect-profiles" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.607986 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.614641 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.781134 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.781446 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.781574 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8n5\" (UniqueName: \"kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.883388 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.883497 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.883533 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk8n5\" (UniqueName: \"kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.884286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.884318 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.906162 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wk8n5\" (UniqueName: \"kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5\") pod \"redhat-operators-89qzd\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:16 crc kubenswrapper[4834]: I0121 15:15:16.959843 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:17 crc kubenswrapper[4834]: I0121 15:15:17.441130 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:17 crc kubenswrapper[4834]: I0121 15:15:17.852578 4834 generic.go:334] "Generic (PLEG): container finished" podID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerID="aed26cb2b934d640921d03a404e3a898d76ae66ed1dfe96dd045252135f87a6d" exitCode=0 Jan 21 15:15:17 crc kubenswrapper[4834]: I0121 15:15:17.853017 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerDied","Data":"aed26cb2b934d640921d03a404e3a898d76ae66ed1dfe96dd045252135f87a6d"} Jan 21 15:15:17 crc kubenswrapper[4834]: I0121 15:15:17.853063 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerStarted","Data":"9803d4ddb1aff5de85be4b750c881fc7926aa6809f9538b7ebe971237efa4066"} Jan 21 15:15:17 crc kubenswrapper[4834]: I0121 15:15:17.855109 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:15:19 crc kubenswrapper[4834]: I0121 15:15:19.873063 4834 generic.go:334] "Generic (PLEG): container finished" podID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerID="0689550f294a0df07b4c58b76fa28fa2df7fd01b834dc6c82b01908d20856373" exitCode=0 Jan 21 15:15:19 crc kubenswrapper[4834]: I0121 15:15:19.873196 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerDied","Data":"0689550f294a0df07b4c58b76fa28fa2df7fd01b834dc6c82b01908d20856373"} Jan 21 15:15:20 crc kubenswrapper[4834]: I0121 15:15:20.885246 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerStarted","Data":"b0f94c8da7063ab5774a923ed39ba6dde75d98d40734899321f2738f8109a3f6"} Jan 21 15:15:20 crc kubenswrapper[4834]: I0121 15:15:20.911710 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-89qzd" podStartSLOduration=2.477858966 podStartE2EDuration="4.911677845s" podCreationTimestamp="2026-01-21 15:15:16 +0000 UTC" firstStartedPulling="2026-01-21 15:15:17.854816792 +0000 UTC m=+2663.829165837" lastFinishedPulling="2026-01-21 15:15:20.288635671 +0000 UTC m=+2666.262984716" observedRunningTime="2026-01-21 15:15:20.904688678 +0000 UTC m=+2666.879037743" watchObservedRunningTime="2026-01-21 15:15:20.911677845 +0000 UTC m=+2666.886026890" Jan 21 15:15:26 crc kubenswrapper[4834]: I0121 15:15:26.960260 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:26 crc kubenswrapper[4834]: I0121 15:15:26.961141 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:27 crc kubenswrapper[4834]: I0121 15:15:27.017466 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:27 crc kubenswrapper[4834]: I0121 15:15:27.999359 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:28 crc kubenswrapper[4834]: I0121 15:15:28.059720 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:29 crc kubenswrapper[4834]: I0121 15:15:29.966762 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-89qzd" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="registry-server" containerID="cri-o://b0f94c8da7063ab5774a923ed39ba6dde75d98d40734899321f2738f8109a3f6" gracePeriod=2 Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.002307 4834 generic.go:334] "Generic (PLEG): container finished" podID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerID="b0f94c8da7063ab5774a923ed39ba6dde75d98d40734899321f2738f8109a3f6" exitCode=0 Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.002398 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerDied","Data":"b0f94c8da7063ab5774a923ed39ba6dde75d98d40734899321f2738f8109a3f6"} Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.056478 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.114555 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content\") pod \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.114666 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk8n5\" (UniqueName: \"kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5\") pod \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.114754 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities\") pod \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\" (UID: \"03e5a5eb-c2ff-4037-8378-004f49ca33cc\") " Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.116520 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities" (OuterVolumeSpecName: "utilities") pod "03e5a5eb-c2ff-4037-8378-004f49ca33cc" (UID: "03e5a5eb-c2ff-4037-8378-004f49ca33cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.123340 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5" (OuterVolumeSpecName: "kube-api-access-wk8n5") pod "03e5a5eb-c2ff-4037-8378-004f49ca33cc" (UID: "03e5a5eb-c2ff-4037-8378-004f49ca33cc"). InnerVolumeSpecName "kube-api-access-wk8n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.217496 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk8n5\" (UniqueName: \"kubernetes.io/projected/03e5a5eb-c2ff-4037-8378-004f49ca33cc-kube-api-access-wk8n5\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.217541 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.259967 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03e5a5eb-c2ff-4037-8378-004f49ca33cc" (UID: "03e5a5eb-c2ff-4037-8378-004f49ca33cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:15:33 crc kubenswrapper[4834]: I0121 15:15:33.318984 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e5a5eb-c2ff-4037-8378-004f49ca33cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.015187 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89qzd" event={"ID":"03e5a5eb-c2ff-4037-8378-004f49ca33cc","Type":"ContainerDied","Data":"9803d4ddb1aff5de85be4b750c881fc7926aa6809f9538b7ebe971237efa4066"} Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.015828 4834 scope.go:117] "RemoveContainer" containerID="b0f94c8da7063ab5774a923ed39ba6dde75d98d40734899321f2738f8109a3f6" Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.016159 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-89qzd" Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.055694 4834 scope.go:117] "RemoveContainer" containerID="0689550f294a0df07b4c58b76fa28fa2df7fd01b834dc6c82b01908d20856373" Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.066976 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.079031 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-89qzd"] Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.082445 4834 scope.go:117] "RemoveContainer" containerID="aed26cb2b934d640921d03a404e3a898d76ae66ed1dfe96dd045252135f87a6d" Jan 21 15:15:34 crc kubenswrapper[4834]: I0121 15:15:34.336358 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" path="/var/lib/kubelet/pods/03e5a5eb-c2ff-4037-8378-004f49ca33cc/volumes" Jan 21 15:15:47 crc kubenswrapper[4834]: I0121 15:15:47.114273 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:15:47 crc kubenswrapper[4834]: I0121 15:15:47.115076 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:16:17 crc kubenswrapper[4834]: I0121 15:16:17.113629 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:16:17 crc kubenswrapper[4834]: I0121 15:16:17.114477 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.114355 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.115288 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.115362 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.116290 4834 
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.116290 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.116366 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4" gracePeriod=600
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.622785 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4" exitCode=0
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.622880 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4"}
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.623244 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"}
Jan 21 15:16:47 crc kubenswrapper[4834]: I0121 15:16:47.623290 4834 scope.go:117] "RemoveContainer" containerID="3755e82a60b2d02add6bd4bfc76ff7b5ff7654e4d6a57d8275ca5fcbfa9c827a"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.891228 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"]
Jan 21 15:17:23 crc kubenswrapper[4834]: E0121 15:17:23.892624 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="registry-server"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.892650 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="registry-server"
Jan 21 15:17:23 crc kubenswrapper[4834]: E0121 15:17:23.892668 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="extract-utilities"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.892678 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="extract-utilities"
Jan 21 15:17:23 crc kubenswrapper[4834]: E0121 15:17:23.892693 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="extract-content"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.892701 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="extract-content"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.892876 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e5a5eb-c2ff-4037-8378-004f49ca33cc" containerName="registry-server"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.894639 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:23 crc kubenswrapper[4834]: I0121 15:17:23.903662 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"]
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.002429 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.002548 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6k6d\" (UniqueName: \"kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.002623 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.104921 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.105023 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6k6d\" (UniqueName: \"kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.105097 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.105676 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.105676 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.129473 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6k6d\" (UniqueName: \"kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d\") pod \"certified-operators-bsx9c\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.230674 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsx9c"
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.521467 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"]
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.942456 4834 generic.go:334] "Generic (PLEG): container finished" podID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerID="3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9" exitCode=0
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.942524 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerDied","Data":"3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9"}
Jan 21 15:17:24 crc kubenswrapper[4834]: I0121 15:17:24.942565 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerStarted","Data":"a620ce977d42d5fe85bd0f02fcf7968ed995c59f8a65f74b7140920ff0fe5924"}
Jan 21 15:17:26 crc kubenswrapper[4834]: I0121 15:17:26.959457 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerStarted","Data":"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6"}
Jan 21 15:17:27 crc kubenswrapper[4834]: I0121 15:17:27.971195 4834 generic.go:334] "Generic (PLEG): container finished" podID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerID="86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6" exitCode=0
Jan 21 15:17:27 crc kubenswrapper[4834]: I0121 15:17:27.971263 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerDied","Data":"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6"}
Jan 21 15:17:28 crc kubenswrapper[4834]: I0121 15:17:28.982249 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerStarted","Data":"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0"}
Jan 21 15:17:29 crc kubenswrapper[4834]: I0121 15:17:29.009522 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsx9c" podStartSLOduration=2.397744351 podStartE2EDuration="6.009496639s" podCreationTimestamp="2026-01-21 15:17:23 +0000 UTC" firstStartedPulling="2026-01-21 15:17:24.944499477 +0000 UTC m=+2790.918848522" lastFinishedPulling="2026-01-21 15:17:28.556251765 +0000 UTC m=+2794.530600810" observedRunningTime="2026-01-21 15:17:29.002527932 +0000 UTC m=+2794.976876977" watchObservedRunningTime="2026-01-21 15:17:29.009496639 +0000 UTC m=+2794.983845684"
pods=["openshift-marketplace/community-operators-j79vm"] Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.283119 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.297795 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j79vm"] Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.423892 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-utilities\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.424110 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-catalog-content\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.424154 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr99\" (UniqueName: \"kubernetes.io/projected/8b906389-a061-4191-a46e-b0950bacd6ff-kube-api-access-4gr99\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.526559 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-utilities\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.526613 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-catalog-content\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.526640 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr99\" (UniqueName: \"kubernetes.io/projected/8b906389-a061-4191-a46e-b0950bacd6ff-kube-api-access-4gr99\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.527343 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-utilities\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.527378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b906389-a061-4191-a46e-b0950bacd6ff-catalog-content\") pod \"community-operators-j79vm\" (UID: 
\"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.550309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr99\" (UniqueName: \"kubernetes.io/projected/8b906389-a061-4191-a46e-b0950bacd6ff-kube-api-access-4gr99\") pod \"community-operators-j79vm\" (UID: \"8b906389-a061-4191-a46e-b0950bacd6ff\") " pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:30 crc kubenswrapper[4834]: I0121 15:17:30.612696 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:31 crc kubenswrapper[4834]: I0121 15:17:31.147762 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j79vm"] Jan 21 15:17:32 crc kubenswrapper[4834]: I0121 15:17:32.010367 4834 generic.go:334] "Generic (PLEG): container finished" podID="8b906389-a061-4191-a46e-b0950bacd6ff" containerID="6fef478eec607b90b65f2bd2ce9cf8b2f631889cec07202c8e6ac803ae390ef7" exitCode=0 Jan 21 15:17:32 crc kubenswrapper[4834]: I0121 15:17:32.010495 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j79vm" event={"ID":"8b906389-a061-4191-a46e-b0950bacd6ff","Type":"ContainerDied","Data":"6fef478eec607b90b65f2bd2ce9cf8b2f631889cec07202c8e6ac803ae390ef7"} Jan 21 15:17:32 crc kubenswrapper[4834]: I0121 15:17:32.010820 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j79vm" event={"ID":"8b906389-a061-4191-a46e-b0950bacd6ff","Type":"ContainerStarted","Data":"17b44ce14eda4040f6e7d025279d12ede2a0accdc2acf69d84e912cdaf0e27a3"} Jan 21 15:17:34 crc kubenswrapper[4834]: I0121 15:17:34.232262 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:34 crc kubenswrapper[4834]: I0121 15:17:34.232702 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:34 crc kubenswrapper[4834]: I0121 15:17:34.297396 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:35 crc kubenswrapper[4834]: I0121 15:17:35.081259 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:35 crc kubenswrapper[4834]: I0121 15:17:35.461082 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"] Jan 21 15:17:36 crc kubenswrapper[4834]: I0121 15:17:36.050282 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j79vm" event={"ID":"8b906389-a061-4191-a46e-b0950bacd6ff","Type":"ContainerStarted","Data":"aadcdc0bac0f9ea752e2c2c485f7d743ad15b70caa944a5b2d43ca9a5b4d7c24"} Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.061286 4834 generic.go:334] "Generic (PLEG): container finished" podID="8b906389-a061-4191-a46e-b0950bacd6ff" containerID="aadcdc0bac0f9ea752e2c2c485f7d743ad15b70caa944a5b2d43ca9a5b4d7c24" exitCode=0 Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.061350 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j79vm" 
event={"ID":"8b906389-a061-4191-a46e-b0950bacd6ff","Type":"ContainerDied","Data":"aadcdc0bac0f9ea752e2c2c485f7d743ad15b70caa944a5b2d43ca9a5b4d7c24"} Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.061570 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsx9c" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="registry-server" containerID="cri-o://c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0" gracePeriod=2 Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.873847 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.980670 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6k6d\" (UniqueName: \"kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d\") pod \"3ea86825-6fa0-490f-a074-0ffec2a47512\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.980907 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content\") pod \"3ea86825-6fa0-490f-a074-0ffec2a47512\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.981030 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities\") pod \"3ea86825-6fa0-490f-a074-0ffec2a47512\" (UID: \"3ea86825-6fa0-490f-a074-0ffec2a47512\") " Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.982577 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities" (OuterVolumeSpecName: "utilities") pod "3ea86825-6fa0-490f-a074-0ffec2a47512" (UID: "3ea86825-6fa0-490f-a074-0ffec2a47512"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.983014 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:37 crc kubenswrapper[4834]: I0121 15:17:37.989418 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d" (OuterVolumeSpecName: "kube-api-access-v6k6d") pod "3ea86825-6fa0-490f-a074-0ffec2a47512" (UID: "3ea86825-6fa0-490f-a074-0ffec2a47512"). InnerVolumeSpecName "kube-api-access-v6k6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.041248 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea86825-6fa0-490f-a074-0ffec2a47512" (UID: "3ea86825-6fa0-490f-a074-0ffec2a47512"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.074635 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j79vm" event={"ID":"8b906389-a061-4191-a46e-b0950bacd6ff","Type":"ContainerStarted","Data":"d03a1eb7fa10932637dcaec7e5155a9d43c5b6c01f3ffbf98890f6a34388ccb9"} Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.079157 4834 generic.go:334] "Generic (PLEG): container finished" podID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerID="c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0" exitCode=0 Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.079221 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerDied","Data":"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0"} Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.079292 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsx9c" event={"ID":"3ea86825-6fa0-490f-a074-0ffec2a47512","Type":"ContainerDied","Data":"a620ce977d42d5fe85bd0f02fcf7968ed995c59f8a65f74b7140920ff0fe5924"} Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.079322 4834 scope.go:117] "RemoveContainer" containerID="c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.079482 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsx9c" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.084155 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea86825-6fa0-490f-a074-0ffec2a47512-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.084198 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6k6d\" (UniqueName: \"kubernetes.io/projected/3ea86825-6fa0-490f-a074-0ffec2a47512-kube-api-access-v6k6d\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.105011 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j79vm" podStartSLOduration=2.477379091 podStartE2EDuration="8.104982531s" podCreationTimestamp="2026-01-21 15:17:30 +0000 UTC" firstStartedPulling="2026-01-21 15:17:32.012250266 +0000 UTC m=+2797.986599311" lastFinishedPulling="2026-01-21 15:17:37.639853686 +0000 UTC m=+2803.614202751" observedRunningTime="2026-01-21 15:17:38.097442036 +0000 UTC m=+2804.071791101" watchObservedRunningTime="2026-01-21 15:17:38.104982531 +0000 UTC m=+2804.079331576" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.115163 4834 scope.go:117] "RemoveContainer" containerID="86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.122788 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"] Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.128803 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsx9c"] Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.152487 4834 scope.go:117] "RemoveContainer" containerID="3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9" Jan 21 
15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.175391 4834 scope.go:117] "RemoveContainer" containerID="c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0" Jan 21 15:17:38 crc kubenswrapper[4834]: E0121 15:17:38.176975 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0\": container with ID starting with c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0 not found: ID does not exist" containerID="c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.177036 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0"} err="failed to get container status \"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0\": rpc error: code = NotFound desc = could not find container \"c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0\": container with ID starting with c9147dbc02589a92b961370d5860fcc5e0c0260303202179aef5bf6553249af0 not found: ID does not exist" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.177076 4834 scope.go:117] "RemoveContainer" containerID="86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6" Jan 21 15:17:38 crc kubenswrapper[4834]: E0121 15:17:38.177682 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6\": container with ID starting with 86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6 not found: ID does not exist" containerID="86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.177724 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6"} err="failed to get container status \"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6\": rpc error: code = NotFound desc = could not find container \"86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6\": container with ID starting with 86f2e2d6c66c57776ade4e62cb1819369b160ec11a0d12f1fa33060554df85c6 not found: ID does not exist" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.177758 4834 scope.go:117] "RemoveContainer" containerID="3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9" Jan 21 15:17:38 crc kubenswrapper[4834]: E0121 15:17:38.178177 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9\": container with ID starting with 3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9 not found: ID does not exist" containerID="3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.178234 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9"} err="failed to get container status \"3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9\": rpc error: code = NotFound desc = could not find container 
\"3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9\": container with ID starting with 3fcd35c9a0e7445caf3798b9ce986216dfad0e5bcd4819b060e5a6eafb8405c9 not found: ID does not exist" Jan 21 15:17:38 crc kubenswrapper[4834]: I0121 15:17:38.335897 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" path="/var/lib/kubelet/pods/3ea86825-6fa0-490f-a074-0ffec2a47512/volumes" Jan 21 15:17:40 crc kubenswrapper[4834]: I0121 15:17:40.613897 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:40 crc kubenswrapper[4834]: I0121 15:17:40.614557 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:40 crc kubenswrapper[4834]: I0121 15:17:40.716435 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:50 crc kubenswrapper[4834]: I0121 15:17:50.665995 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j79vm" Jan 21 15:17:50 crc kubenswrapper[4834]: I0121 15:17:50.745474 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j79vm"] Jan 21 15:17:50 crc kubenswrapper[4834]: I0121 15:17:50.965899 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2fjs"] Jan 21 15:17:50 crc kubenswrapper[4834]: I0121 15:17:50.966276 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n2fjs" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="registry-server" containerID="cri-o://3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef" gracePeriod=2 Jan 21 15:17:51 crc kubenswrapper[4834]: I0121 15:17:51.973740 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2fjs" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.154777 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities\") pod \"0c18088d-a345-4848-a9d4-407441f5bb99\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.154901 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content\") pod \"0c18088d-a345-4848-a9d4-407441f5bb99\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.155034 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8xng\" (UniqueName: \"kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng\") pod \"0c18088d-a345-4848-a9d4-407441f5bb99\" (UID: \"0c18088d-a345-4848-a9d4-407441f5bb99\") " Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.155847 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities" (OuterVolumeSpecName: "utilities") pod "0c18088d-a345-4848-a9d4-407441f5bb99" (UID: "0c18088d-a345-4848-a9d4-407441f5bb99"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.164154 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng" (OuterVolumeSpecName: "kube-api-access-r8xng") pod "0c18088d-a345-4848-a9d4-407441f5bb99" (UID: "0c18088d-a345-4848-a9d4-407441f5bb99"). InnerVolumeSpecName "kube-api-access-r8xng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.203825 4834 generic.go:334] "Generic (PLEG): container finished" podID="0c18088d-a345-4848-a9d4-407441f5bb99" containerID="3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef" exitCode=0 Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.203918 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n2fjs" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.203912 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerDied","Data":"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef"} Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.204027 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n2fjs" event={"ID":"0c18088d-a345-4848-a9d4-407441f5bb99","Type":"ContainerDied","Data":"379f26697bf9de53971efb08c3383a34934b16a094739e945189c0c2449f6da4"} Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.204056 4834 scope.go:117] "RemoveContainer" containerID="3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.206748 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c18088d-a345-4848-a9d4-407441f5bb99" (UID: "0c18088d-a345-4848-a9d4-407441f5bb99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.235065 4834 scope.go:117] "RemoveContainer" containerID="c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.256894 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.256987 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c18088d-a345-4848-a9d4-407441f5bb99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.257000 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8xng\" (UniqueName: \"kubernetes.io/projected/0c18088d-a345-4848-a9d4-407441f5bb99-kube-api-access-r8xng\") on node \"crc\" DevicePath \"\"" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.259184 4834 scope.go:117] "RemoveContainer" containerID="11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.280711 4834 scope.go:117] "RemoveContainer" containerID="3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef" Jan 21 15:17:52 crc kubenswrapper[4834]: E0121 15:17:52.281430 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef\": container with ID starting with 3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef not found: ID does not exist" containerID="3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.281480 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef"} err="failed to get container status \"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef\": rpc error: code = NotFound desc = could not find container \"3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef\": container with ID starting with 3827969dd950803fcd9fd24b8770ea2071e8d493f0e148a1c4f721beeb4727ef not found: ID does not exist" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.281513 4834 scope.go:117] "RemoveContainer" containerID="c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548" Jan 21 15:17:52 crc kubenswrapper[4834]: E0121 15:17:52.281754 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548\": container with ID starting with c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548 not found: ID does not exist" containerID="c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.281789 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548"} err="failed to get container status \"c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548\": rpc error: code = NotFound desc = could not find container 
\"c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548\": container with ID starting with c2772c5dd7dfd11a4fe0242b71906343bd79cf66559f3fab657c3560535cb548 not found: ID does not exist" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.281808 4834 scope.go:117] "RemoveContainer" containerID="11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250" Jan 21 15:17:52 crc kubenswrapper[4834]: E0121 15:17:52.282113 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250\": container with ID starting with 11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250 not found: ID does not exist" containerID="11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.282139 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250"} err="failed to get container status \"11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250\": rpc error: code = NotFound desc = could not find container \"11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250\": container with ID starting with 11dd55cac38821def9898e679ea048d76c9c401a4b449849077c3f00d094b250 not found: ID does not exist" Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.570560 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n2fjs"] Jan 21 15:17:52 crc kubenswrapper[4834]: I0121 15:17:52.599054 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n2fjs"] Jan 21 15:17:54 crc kubenswrapper[4834]: I0121 15:17:54.337475 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" path="/var/lib/kubelet/pods/0c18088d-a345-4848-a9d4-407441f5bb99/volumes" Jan 21 15:18:47 crc kubenswrapper[4834]: I0121 15:18:47.113737 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:18:47 crc kubenswrapper[4834]: I0121 15:18:47.114363 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:19:17 crc kubenswrapper[4834]: I0121 15:19:17.114555 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:19:17 crc kubenswrapper[4834]: I0121 15:19:17.115221 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:19:47 crc 
kubenswrapper[4834]: I0121 15:19:47.114078 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.114787 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.114845 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.115979 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.116107 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" gracePeriod=600 Jan 21 15:19:47 crc kubenswrapper[4834]: E0121 15:19:47.236587 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.513838 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" exitCode=0 Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.513897 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"} Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.513984 4834 scope.go:117] "RemoveContainer" containerID="801f20bc8a5062f15f647f4378d846ce37f6c31931336fad30ace7f7068b64b4" Jan 21 15:19:47 crc kubenswrapper[4834]: I0121 15:19:47.514573 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:19:47 crc kubenswrapper[4834]: E0121 15:19:47.514971 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:19:59 crc kubenswrapper[4834]: E0121 15:19:59.544626 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 21 15:20:01 crc kubenswrapper[4834]: I0121 15:20:01.324672 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:20:01 crc kubenswrapper[4834]: E0121 15:20:01.325276 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:20:14 crc kubenswrapper[4834]: I0121 15:20:14.329364 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:20:14 crc kubenswrapper[4834]: E0121 15:20:14.330565 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:20:29 crc kubenswrapper[4834]: I0121 15:20:29.325286 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:20:29 crc kubenswrapper[4834]: E0121 15:20:29.326568 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:20:41 crc kubenswrapper[4834]: I0121 15:20:41.325256 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:20:41 crc kubenswrapper[4834]: E0121 15:20:41.326040 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:20:54 crc kubenswrapper[4834]: I0121 15:20:54.328847 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:20:54 crc kubenswrapper[4834]: E0121 15:20:54.329720 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:21:07 crc kubenswrapper[4834]: I0121 15:21:07.324748 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:21:07 crc kubenswrapper[4834]: E0121 15:21:07.326031 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:21:19 crc kubenswrapper[4834]: I0121 15:21:19.324642 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:21:19 crc kubenswrapper[4834]: E0121 15:21:19.325962 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:21:30 crc kubenswrapper[4834]: I0121 15:21:30.326004 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:21:30 crc kubenswrapper[4834]: E0121 15:21:30.326951 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:21:45 crc kubenswrapper[4834]: I0121 15:21:45.325822 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:21:45 crc kubenswrapper[4834]: E0121 15:21:45.327077 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:22:00 crc kubenswrapper[4834]: I0121 15:22:00.324776 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:22:00 crc kubenswrapper[4834]: E0121 15:22:00.326026 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:22:12 crc kubenswrapper[4834]: I0121 15:22:12.325069 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:22:12 crc kubenswrapper[4834]: E0121 15:22:12.326382 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:22:24 crc kubenswrapper[4834]: I0121 15:22:24.328470 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:22:24 crc kubenswrapper[4834]: E0121 15:22:24.329281 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.348316 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349024 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="extract-utilities" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349040 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="extract-utilities" Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349061 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="extract-content" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349067 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="extract-content" Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349078 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="extract-content" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349084 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="extract-content" Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349113 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349120 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349136 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349144 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: E0121 15:22:28.349160 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="extract-utilities" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349166 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="extract-utilities" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349322 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea86825-6fa0-490f-a074-0ffec2a47512" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.349349 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c18088d-a345-4848-a9d4-407441f5bb99" containerName="registry-server" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.350749 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.368677 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.529565 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4nt\" (UniqueName: \"kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.529672 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.529710 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.631031 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.631104 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.631207 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj4nt\" (UniqueName: \"kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt\") pod 
\"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.631846 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.631910 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.672283 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj4nt\" (UniqueName: \"kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt\") pod \"redhat-marketplace-5dhk4\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:28 crc kubenswrapper[4834]: I0121 15:22:28.676344 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:29 crc kubenswrapper[4834]: I0121 15:22:29.271142 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:29 crc kubenswrapper[4834]: I0121 15:22:29.941759 4834 generic.go:334] "Generic (PLEG): container finished" podID="2345f7d7-b342-40be-81b2-ed9435cca8b7" containerID="7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4" exitCode=0 Jan 21 15:22:29 crc kubenswrapper[4834]: I0121 15:22:29.941841 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerDied","Data":"7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4"} Jan 21 15:22:29 crc kubenswrapper[4834]: I0121 15:22:29.941895 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerStarted","Data":"3dd6411edf96e7a7da381339e6e98310826167e2278cc80ce0bd1e614a9d803d"} Jan 21 15:22:29 crc kubenswrapper[4834]: I0121 15:22:29.944863 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:22:30 crc kubenswrapper[4834]: I0121 15:22:30.954214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerStarted","Data":"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597"} Jan 21 15:22:31 crc kubenswrapper[4834]: I0121 15:22:31.970583 4834 generic.go:334] "Generic (PLEG): container finished" podID="2345f7d7-b342-40be-81b2-ed9435cca8b7" containerID="50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597" exitCode=0 Jan 21 15:22:31 crc kubenswrapper[4834]: I0121 15:22:31.970661 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" 
event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerDied","Data":"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597"} Jan 21 15:22:32 crc kubenswrapper[4834]: I0121 15:22:32.987522 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerStarted","Data":"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8"} Jan 21 15:22:33 crc kubenswrapper[4834]: I0121 15:22:33.012139 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5dhk4" podStartSLOduration=2.583526105 podStartE2EDuration="5.012089437s" podCreationTimestamp="2026-01-21 15:22:28 +0000 UTC" firstStartedPulling="2026-01-21 15:22:29.944458977 +0000 UTC m=+3095.918808022" lastFinishedPulling="2026-01-21 15:22:32.373022309 +0000 UTC m=+3098.347371354" observedRunningTime="2026-01-21 15:22:33.008084953 +0000 UTC m=+3098.982434008" watchObservedRunningTime="2026-01-21 15:22:33.012089437 +0000 UTC m=+3098.986438482" Jan 21 15:22:38 crc kubenswrapper[4834]: I0121 15:22:38.677505 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:38 crc kubenswrapper[4834]: I0121 15:22:38.677983 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:38 crc kubenswrapper[4834]: I0121 15:22:38.723182 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:39 crc kubenswrapper[4834]: I0121 15:22:39.082958 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:39 crc kubenswrapper[4834]: I0121 15:22:39.145664 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:39 crc kubenswrapper[4834]: I0121 15:22:39.325466 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:22:39 crc kubenswrapper[4834]: E0121 15:22:39.325874 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:22:41 crc kubenswrapper[4834]: I0121 15:22:41.047029 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5dhk4" podUID="2345f7d7-b342-40be-81b2-ed9435cca8b7" containerName="registry-server" containerID="cri-o://128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8" gracePeriod=2 Jan 21 15:22:41 crc kubenswrapper[4834]: I0121 15:22:41.979658 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.059380 4834 generic.go:334] "Generic (PLEG): container finished" podID="2345f7d7-b342-40be-81b2-ed9435cca8b7" containerID="128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8" exitCode=0 Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.059438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerDied","Data":"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8"} Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.059493 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dhk4" event={"ID":"2345f7d7-b342-40be-81b2-ed9435cca8b7","Type":"ContainerDied","Data":"3dd6411edf96e7a7da381339e6e98310826167e2278cc80ce0bd1e614a9d803d"} Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.059518 4834 scope.go:117] "RemoveContainer" containerID="128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.059715 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dhk4" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.095506 4834 scope.go:117] "RemoveContainer" containerID="50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.121152 4834 scope.go:117] "RemoveContainer" containerID="7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.147542 4834 scope.go:117] "RemoveContainer" containerID="128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8" Jan 21 15:22:42 crc kubenswrapper[4834]: E0121 15:22:42.148141 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8\": container with ID starting with 128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8 not found: ID does not exist" containerID="128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.148187 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8"} err="failed to get container status \"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8\": rpc error: code = NotFound desc = could not find container \"128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8\": container with ID starting with 128ae040560c5297d05796fc78ec9cef805691f7c4baa0a57fba34f9c563bae8 not found: ID does not exist" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.148216 4834 scope.go:117] "RemoveContainer" containerID="50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597" Jan 21 15:22:42 crc kubenswrapper[4834]: E0121 15:22:42.148526 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597\": container with ID starting with 50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597 not found: ID does not exist" 
containerID="50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.148558 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597"} err="failed to get container status \"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597\": rpc error: code = NotFound desc = could not find container \"50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597\": container with ID starting with 50832f8a8f9aab21c16d4b3c82020567ad98ff17cb5b9c148c280a3ecac9f597 not found: ID does not exist" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.148583 4834 scope.go:117] "RemoveContainer" containerID="7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4" Jan 21 15:22:42 crc kubenswrapper[4834]: E0121 15:22:42.148808 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4\": container with ID starting with 7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4 not found: ID does not exist" containerID="7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.148832 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4"} err="failed to get container status \"7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4\": rpc error: code = NotFound desc = could not find container \"7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4\": container with ID starting with 7c138d1f351ab25e0741a4d0c7cc86e86adb0b8656a07075bb8d103c8b7424f4 not found: ID does not exist" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.158506 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities\") pod \"2345f7d7-b342-40be-81b2-ed9435cca8b7\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.158583 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4nt\" (UniqueName: \"kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt\") pod \"2345f7d7-b342-40be-81b2-ed9435cca8b7\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.158722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content\") pod \"2345f7d7-b342-40be-81b2-ed9435cca8b7\" (UID: \"2345f7d7-b342-40be-81b2-ed9435cca8b7\") " Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.160030 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities" (OuterVolumeSpecName: "utilities") pod "2345f7d7-b342-40be-81b2-ed9435cca8b7" (UID: "2345f7d7-b342-40be-81b2-ed9435cca8b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.166647 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt" (OuterVolumeSpecName: "kube-api-access-wj4nt") pod "2345f7d7-b342-40be-81b2-ed9435cca8b7" (UID: "2345f7d7-b342-40be-81b2-ed9435cca8b7"). InnerVolumeSpecName "kube-api-access-wj4nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.188544 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2345f7d7-b342-40be-81b2-ed9435cca8b7" (UID: "2345f7d7-b342-40be-81b2-ed9435cca8b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.260323 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.260373 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2345f7d7-b342-40be-81b2-ed9435cca8b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.260391 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj4nt\" (UniqueName: \"kubernetes.io/projected/2345f7d7-b342-40be-81b2-ed9435cca8b7-kube-api-access-wj4nt\") on node \"crc\" DevicePath \"\"" Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.391645 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:42 crc kubenswrapper[4834]: I0121 15:22:42.400325 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dhk4"] Jan 21 15:22:44 crc kubenswrapper[4834]: I0121 15:22:44.334407 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2345f7d7-b342-40be-81b2-ed9435cca8b7" path="/var/lib/kubelet/pods/2345f7d7-b342-40be-81b2-ed9435cca8b7/volumes" Jan 21 15:22:51 crc kubenswrapper[4834]: I0121 15:22:51.324950 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:22:51 crc kubenswrapper[4834]: E0121 15:22:51.325488 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:23:05 crc kubenswrapper[4834]: I0121 15:23:05.325070 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:23:05 crc kubenswrapper[4834]: E0121 15:23:05.326063 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 15:22:51 crc kubenswrapper[4834]: I0121 15:22:51.324950 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:22:51 crc kubenswrapper[4834]: E0121 15:22:51.325488 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:23:05 crc kubenswrapper[4834]: I0121 15:23:05.325070 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:23:05 crc kubenswrapper[4834]: E0121 15:23:05.326063 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:23:19 crc kubenswrapper[4834]: I0121 15:23:19.325587 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:23:19 crc kubenswrapper[4834]: E0121 15:23:19.327013 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:23:32 crc kubenswrapper[4834]: I0121 15:23:32.325356 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:23:32 crc kubenswrapper[4834]: E0121 15:23:32.326338 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:23:46 crc kubenswrapper[4834]: I0121 15:23:46.324773 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:23:46 crc kubenswrapper[4834]: E0121 15:23:46.325578 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:24:00 crc kubenswrapper[4834]: I0121 15:24:00.325869 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:24:00 crc kubenswrapper[4834]: E0121 15:24:00.327172 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:24:11 crc kubenswrapper[4834]: I0121 15:24:11.324714 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:24:11 crc kubenswrapper[4834]: E0121 15:24:11.325489 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:24:25 crc kubenswrapper[4834]: I0121 15:24:25.325362 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:24:25 crc kubenswrapper[4834]: E0121 15:24:25.326069 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:24:36 crc kubenswrapper[4834]: I0121 15:24:36.325814 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:24:36 crc kubenswrapper[4834]: E0121 15:24:36.327104 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:24:51 crc kubenswrapper[4834]: I0121 15:24:51.325277 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814"
Jan 21 15:24:52 crc kubenswrapper[4834]: I0121 15:24:52.085456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"d30d547c3b4d5cb308b12f1e0b83e287a1154379737b78e8c6ae446c26fdd37c"}
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.370761 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mj7zq"] Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.538399 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.538446 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.538519 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxkr\" (UniqueName: \"kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.640669 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.640730 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.640827 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxkr\" (UniqueName: \"kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.641276 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.641366 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.660074 4834 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qhxkr\" (UniqueName: \"kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr\") pod \"redhat-operators-mj7zq\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:46 crc kubenswrapper[4834]: I0121 15:26:46.690429 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:47 crc kubenswrapper[4834]: I0121 15:26:47.222871 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mj7zq"] Jan 21 15:26:48 crc kubenswrapper[4834]: I0121 15:26:48.034958 4834 generic.go:334] "Generic (PLEG): container finished" podID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerID="d71009b2a08e6a9880627cbd9c82085c8434533d5ea1125b153a460506ec214d" exitCode=0 Jan 21 15:26:48 crc kubenswrapper[4834]: I0121 15:26:48.035045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerDied","Data":"d71009b2a08e6a9880627cbd9c82085c8434533d5ea1125b153a460506ec214d"} Jan 21 15:26:48 crc kubenswrapper[4834]: I0121 15:26:48.035333 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerStarted","Data":"ed7ddd971afb65cf94b72eda8253bebc53a6b4cb443fb47a6e197fdf7d2c51ac"} Jan 21 15:26:50 crc kubenswrapper[4834]: I0121 15:26:50.056366 4834 generic.go:334] "Generic (PLEG): container finished" podID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerID="7f5da409c3d0843b4129ba6a8aa38f688d6f2eaccc5d0755ea792fbb09585133" exitCode=0 Jan 21 15:26:50 crc kubenswrapper[4834]: I0121 15:26:50.056438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerDied","Data":"7f5da409c3d0843b4129ba6a8aa38f688d6f2eaccc5d0755ea792fbb09585133"} Jan 21 15:26:51 crc kubenswrapper[4834]: I0121 15:26:51.065579 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerStarted","Data":"ae46c469b1ca48f7c0f52a207b427874364c22b7f0b988540cb646539512c428"} Jan 21 15:26:51 crc kubenswrapper[4834]: I0121 15:26:51.084252 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mj7zq" podStartSLOduration=2.65237065 podStartE2EDuration="5.084226994s" podCreationTimestamp="2026-01-21 15:26:46 +0000 UTC" firstStartedPulling="2026-01-21 15:26:48.037317833 +0000 UTC m=+3354.011666878" lastFinishedPulling="2026-01-21 15:26:50.469174177 +0000 UTC m=+3356.443523222" observedRunningTime="2026-01-21 15:26:51.08024542 +0000 UTC m=+3357.054594465" watchObservedRunningTime="2026-01-21 15:26:51.084226994 +0000 UTC m=+3357.058576039" Jan 21 15:26:56 crc kubenswrapper[4834]: I0121 15:26:56.691721 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:56 crc kubenswrapper[4834]: I0121 15:26:56.692365 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:56 crc kubenswrapper[4834]: I0121 15:26:56.735058 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
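The latency-tracker line above encodes a simple relation: podStartSLOduration is the end-to-end start-up time minus the image-pull window (the SLO metric excludes pull time). Plugging in the timestamps from the entry: 5.084226994s − (15:26:50.469174177 − 15:26:48.037317833) = 2.65237065s, exactly the logged value. The same arithmetic in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	mustParse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the log entry above.
	firstStartedPulling := mustParse("2026-01-21 15:26:48.037317833 +0000 UTC")
	lastFinishedPulling := mustParse("2026-01-21 15:26:50.469174177 +0000 UTC")
	e2e, err := time.ParseDuration("5.084226994s") // podStartE2EDuration
	if err != nil {
		panic(err)
	}

	// SLO duration = end-to-end start-up time minus the image-pull window.
	slo := e2e - lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(slo) // 2.65237065s, matching podStartSLOduration
}
```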
status="started" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:57 crc kubenswrapper[4834]: I0121 15:26:57.153558 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:26:57 crc kubenswrapper[4834]: I0121 15:26:57.208002 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mj7zq"] Jan 21 15:26:59 crc kubenswrapper[4834]: I0121 15:26:59.133622 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mj7zq" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="registry-server" containerID="cri-o://ae46c469b1ca48f7c0f52a207b427874364c22b7f0b988540cb646539512c428" gracePeriod=2 Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.160144 4834 generic.go:334] "Generic (PLEG): container finished" podID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerID="ae46c469b1ca48f7c0f52a207b427874364c22b7f0b988540cb646539512c428" exitCode=0 Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.160244 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerDied","Data":"ae46c469b1ca48f7c0f52a207b427874364c22b7f0b988540cb646539512c428"} Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.725000 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.832684 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities\") pod \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.832755 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhxkr\" (UniqueName: \"kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr\") pod \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.832823 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content\") pod \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\" (UID: \"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60\") " Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.834032 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities" (OuterVolumeSpecName: "utilities") pod "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" (UID: "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.841830 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr" (OuterVolumeSpecName: "kube-api-access-qhxkr") pod "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" (UID: "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60"). InnerVolumeSpecName "kube-api-access-qhxkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.937834 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:27:03 crc kubenswrapper[4834]: I0121 15:27:03.937882 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhxkr\" (UniqueName: \"kubernetes.io/projected/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-kube-api-access-qhxkr\") on node \"crc\" DevicePath \"\"" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.008184 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" (UID: "fddd83cd-fa7c-45f4-a6ec-dda4fa73de60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.039030 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.171592 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mj7zq" event={"ID":"fddd83cd-fa7c-45f4-a6ec-dda4fa73de60","Type":"ContainerDied","Data":"ed7ddd971afb65cf94b72eda8253bebc53a6b4cb443fb47a6e197fdf7d2c51ac"} Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.171665 4834 scope.go:117] "RemoveContainer" containerID="ae46c469b1ca48f7c0f52a207b427874364c22b7f0b988540cb646539512c428" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.171686 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mj7zq" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.192339 4834 scope.go:117] "RemoveContainer" containerID="7f5da409c3d0843b4129ba6a8aa38f688d6f2eaccc5d0755ea792fbb09585133" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.202962 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mj7zq"] Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.208446 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mj7zq"] Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.222773 4834 scope.go:117] "RemoveContainer" containerID="d71009b2a08e6a9880627cbd9c82085c8434533d5ea1125b153a460506ec214d" Jan 21 15:27:04 crc kubenswrapper[4834]: I0121 15:27:04.334491 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" path="/var/lib/kubelet/pods/fddd83cd-fa7c-45f4-a6ec-dda4fa73de60/volumes" Jan 21 15:27:17 crc kubenswrapper[4834]: I0121 15:27:17.114513 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:27:17 crc kubenswrapper[4834]: I0121 15:27:17.115341 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:27:47 crc kubenswrapper[4834]: I0121 15:27:47.114622 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:27:47 crc kubenswrapper[4834]: I0121 15:27:47.115640 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.288549 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9zgs7"] Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.289961 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-content" Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.289981 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-content" Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.290009 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-utilities" Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.290017 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-utilities" Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.290034 4834 
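The two machine-config-daemon liveness failures above (15:27:17 and 15:27:47, a 30-second period) are plain HTTP checks: the kubelet GETs http://127.0.0.1:8798/health and the connection is refused because nothing is listening yet. A stripped-down probe in the same spirit, using the usual "2xx/3xx passes" rule; run against that URL on a machine without the daemon it prints the same connection-refused error:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check with a short timeout.
func probe(url string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println(err)
	}
}
```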
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.288549 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9zgs7"]
Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.289961 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-content"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.289981 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-content"
Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.290009 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-utilities"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.290017 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="extract-utilities"
Jan 21 15:28:03 crc kubenswrapper[4834]: E0121 15:28:03.290034 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="registry-server"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.290040 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="registry-server"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.290252 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddd83cd-fa7c-45f4-a6ec-dda4fa73de60" containerName="registry-server"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.291719 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.315064 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zgs7"]
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.372877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-utilities\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.373610 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kzfd\" (UniqueName: \"kubernetes.io/projected/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-kube-api-access-5kzfd\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.373674 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-catalog-content\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.475664 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-utilities\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.475769 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kzfd\" (UniqueName: \"kubernetes.io/projected/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-kube-api-access-5kzfd\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.475831 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-catalog-content\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.477435 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-utilities\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.477589 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-catalog-content\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.502247 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kzfd\" (UniqueName: \"kubernetes.io/projected/5a591efb-81ec-493d-bf9c-40c1dc4cac3d-kube-api-access-5kzfd\") pod \"certified-operators-9zgs7\" (UID: \"5a591efb-81ec-493d-bf9c-40c1dc4cac3d\") " pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:03 crc kubenswrapper[4834]: I0121 15:28:03.620610 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:04 crc kubenswrapper[4834]: I0121 15:28:04.150086 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zgs7"]
Jan 21 15:28:04 crc kubenswrapper[4834]: I0121 15:28:04.635844 4834 generic.go:334] "Generic (PLEG): container finished" podID="5a591efb-81ec-493d-bf9c-40c1dc4cac3d" containerID="75ea2e4c8baf88e75c211d19b2c57db2ff831bfe40ff997d32d661db77320daf" exitCode=0
Jan 21 15:28:04 crc kubenswrapper[4834]: I0121 15:28:04.635983 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zgs7" event={"ID":"5a591efb-81ec-493d-bf9c-40c1dc4cac3d","Type":"ContainerDied","Data":"75ea2e4c8baf88e75c211d19b2c57db2ff831bfe40ff997d32d661db77320daf"}
Jan 21 15:28:04 crc kubenswrapper[4834]: I0121 15:28:04.636326 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zgs7" event={"ID":"5a591efb-81ec-493d-bf9c-40c1dc4cac3d","Type":"ContainerStarted","Data":"38d7b6523662974b6945775645be7b53b6013f6bec6a8b7a37ccf71d86646525"}
Jan 21 15:28:04 crc kubenswrapper[4834]: I0121 15:28:04.640675 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:28:09 crc kubenswrapper[4834]: E0121 15:28:09.197704 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a591efb_81ec_493d_bf9c_40c1dc4cac3d.slice/crio-conmon-f42af2749c9a5cc3b6200bed04ae853bfe84bf330d50d2259047bbcf2aeaf32f.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 15:28:09 crc kubenswrapper[4834]: I0121 15:28:09.688215 4834 generic.go:334] "Generic (PLEG): container finished" podID="5a591efb-81ec-493d-bf9c-40c1dc4cac3d" containerID="f42af2749c9a5cc3b6200bed04ae853bfe84bf330d50d2259047bbcf2aeaf32f" exitCode=0
Jan 21 15:28:09 crc kubenswrapper[4834]: I0121 15:28:09.688298 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zgs7" event={"ID":"5a591efb-81ec-493d-bf9c-40c1dc4cac3d","Type":"ContainerDied","Data":"f42af2749c9a5cc3b6200bed04ae853bfe84bf330d50d2259047bbcf2aeaf32f"}
Jan 21 15:28:10 crc kubenswrapper[4834]: I0121 15:28:10.697821 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zgs7" event={"ID":"5a591efb-81ec-493d-bf9c-40c1dc4cac3d","Type":"ContainerStarted","Data":"22adad87d4da3066e9f85786a7ea47f196cfd436d74e57da2cd33ac97e6a442e"}
Jan 21 15:28:10 crc kubenswrapper[4834]: I0121 15:28:10.724143 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9zgs7" podStartSLOduration=2.094593251 podStartE2EDuration="7.724118243s" podCreationTimestamp="2026-01-21 15:28:03 +0000 UTC" firstStartedPulling="2026-01-21 15:28:04.640322341 +0000 UTC m=+3430.614671386" lastFinishedPulling="2026-01-21 15:28:10.269847323 +0000 UTC m=+3436.244196378" observedRunningTime="2026-01-21 15:28:10.720195571 +0000 UTC m=+3436.694544616" watchObservedRunningTime="2026-01-21 15:28:10.724118243 +0000 UTC m=+3436.698467288"
Jan 21 15:28:13 crc kubenswrapper[4834]: I0121 15:28:13.621652 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:13 crc kubenswrapper[4834]: I0121 15:28:13.622478 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:13 crc kubenswrapper[4834]: I0121 15:28:13.666321 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9zgs7"
Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.114495 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.115670 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.115787 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.116479 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d30d547c3b4d5cb308b12f1e0b83e287a1154379737b78e8c6ae446c26fdd37c"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.116603 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://d30d547c3b4d5cb308b12f1e0b83e287a1154379737b78e8c6ae446c26fdd37c" gracePeriod=600
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"d30d547c3b4d5cb308b12f1e0b83e287a1154379737b78e8c6ae446c26fdd37c"} Jan 21 15:28:17 crc kubenswrapper[4834]: I0121 15:28:17.776507 4834 scope.go:117] "RemoveContainer" containerID="1c051d63c5c2d50904163da91cd95a9f6236e544649656ed05a482ff59d4c814" Jan 21 15:28:18 crc kubenswrapper[4834]: I0121 15:28:18.787091 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3"} Jan 21 15:28:23 crc kubenswrapper[4834]: I0121 15:28:23.667211 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9zgs7" Jan 21 15:28:23 crc kubenswrapper[4834]: I0121 15:28:23.728805 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zgs7"] Jan 21 15:28:23 crc kubenswrapper[4834]: I0121 15:28:23.776944 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:28:23 crc kubenswrapper[4834]: I0121 15:28:23.777450 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5wr7" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" containerID="cri-o://9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" gracePeriod=2 Jan 21 15:28:25 crc kubenswrapper[4834]: E0121 15:28:25.817350 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4 is running failed: container process not found" containerID="9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:28:25 crc kubenswrapper[4834]: E0121 15:28:25.817800 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4 is running failed: container process not found" containerID="9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:28:25 crc kubenswrapper[4834]: E0121 15:28:25.818441 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4 is running failed: container process not found" containerID="9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:28:25 crc kubenswrapper[4834]: E0121 15:28:25.818478 4834 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-v5wr7" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" Jan 21 15:28:25 crc kubenswrapper[4834]: I0121 15:28:25.840998 4834 generic.go:334] 
"Generic (PLEG): container finished" podID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerID="9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" exitCode=0 Jan 21 15:28:25 crc kubenswrapper[4834]: I0121 15:28:25.841125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerDied","Data":"9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4"} Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.516503 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.675522 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities\") pod \"c79e4b7f-364f-40b0-87df-f09267c34d78\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.675600 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content\") pod \"c79e4b7f-364f-40b0-87df-f09267c34d78\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.675650 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7p7\" (UniqueName: \"kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7\") pod \"c79e4b7f-364f-40b0-87df-f09267c34d78\" (UID: \"c79e4b7f-364f-40b0-87df-f09267c34d78\") " Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.677228 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities" (OuterVolumeSpecName: "utilities") pod "c79e4b7f-364f-40b0-87df-f09267c34d78" (UID: "c79e4b7f-364f-40b0-87df-f09267c34d78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.681893 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7" (OuterVolumeSpecName: "kube-api-access-fz7p7") pod "c79e4b7f-364f-40b0-87df-f09267c34d78" (UID: "c79e4b7f-364f-40b0-87df-f09267c34d78"). InnerVolumeSpecName "kube-api-access-fz7p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.738019 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c79e4b7f-364f-40b0-87df-f09267c34d78" (UID: "c79e4b7f-364f-40b0-87df-f09267c34d78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.776886 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.777024 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c79e4b7f-364f-40b0-87df-f09267c34d78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.777045 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7p7\" (UniqueName: \"kubernetes.io/projected/c79e4b7f-364f-40b0-87df-f09267c34d78-kube-api-access-fz7p7\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.850210 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5wr7" event={"ID":"c79e4b7f-364f-40b0-87df-f09267c34d78","Type":"ContainerDied","Data":"af2304090a196070751ce180681c82fe7e9aeb885746e03f45c680d6dc7a4e21"} Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.850265 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5wr7" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.850274 4834 scope.go:117] "RemoveContainer" containerID="9358a39956ed48e8cd267dce6636964d21f6844eb6041b3651a8dcacfb87e2f4" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.887777 4834 scope.go:117] "RemoveContainer" containerID="93c2b8dd965c003dccb3176561bb12b96ba3702e33b1cf6ddd98e41cb68fea33" Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.890889 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.898749 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5wr7"] Jan 21 15:28:26 crc kubenswrapper[4834]: I0121 15:28:26.919957 4834 scope.go:117] "RemoveContainer" containerID="f5c745ddbe13fd7983e943e1e01bba92e9f55d6aca604e82b6d01355fc7ee618" Jan 21 15:28:28 crc kubenswrapper[4834]: I0121 15:28:28.333166 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" path="/var/lib/kubelet/pods/c79e4b7f-364f-40b0-87df-f09267c34d78/volumes" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.291165 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:42 crc kubenswrapper[4834]: E0121 15:28:42.292076 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="extract-content" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.292088 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="extract-content" Jan 21 15:28:42 crc kubenswrapper[4834]: E0121 15:28:42.292111 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.292118 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" Jan 21 15:28:42 crc kubenswrapper[4834]: E0121 15:28:42.292129 4834 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="extract-utilities" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.292135 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="extract-utilities" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.292258 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79e4b7f-364f-40b0-87df-f09267c34d78" containerName="registry-server" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.293651 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.305894 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.412115 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9845s\" (UniqueName: \"kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.412299 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.412437 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.513800 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.513891 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.513982 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9845s\" (UniqueName: \"kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.514512 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.514512 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.539019 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9845s\" (UniqueName: \"kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s\") pod \"community-operators-gpjvw\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.615229 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:42 crc kubenswrapper[4834]: I0121 15:28:42.983652 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:43 crc kubenswrapper[4834]: I0121 15:28:43.070944 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerStarted","Data":"9d9b2a2d54229df873ed29cb007f1bd4bcbc86ff240f4bf6b61aa4104f3cd133"} Jan 21 15:28:44 crc kubenswrapper[4834]: I0121 15:28:44.081533 4834 generic.go:334] "Generic (PLEG): container finished" podID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerID="c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de" exitCode=0 Jan 21 15:28:44 crc kubenswrapper[4834]: I0121 15:28:44.081717 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerDied","Data":"c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de"} Jan 21 15:28:45 crc kubenswrapper[4834]: I0121 15:28:45.090737 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerStarted","Data":"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f"} Jan 21 15:28:46 crc kubenswrapper[4834]: I0121 15:28:46.099879 4834 generic.go:334] "Generic (PLEG): container finished" podID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerID="2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f" exitCode=0 Jan 21 15:28:46 crc kubenswrapper[4834]: I0121 15:28:46.099956 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerDied","Data":"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f"} Jan 21 15:28:47 crc kubenswrapper[4834]: I0121 15:28:47.111128 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerStarted","Data":"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0"} Jan 21 15:28:47 crc kubenswrapper[4834]: I0121 15:28:47.136115 
4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gpjvw" podStartSLOduration=2.391934553 podStartE2EDuration="5.136091735s" podCreationTimestamp="2026-01-21 15:28:42 +0000 UTC" firstStartedPulling="2026-01-21 15:28:44.084622951 +0000 UTC m=+3470.058971996" lastFinishedPulling="2026-01-21 15:28:46.828780133 +0000 UTC m=+3472.803129178" observedRunningTime="2026-01-21 15:28:47.128098846 +0000 UTC m=+3473.102447911" watchObservedRunningTime="2026-01-21 15:28:47.136091735 +0000 UTC m=+3473.110440780" Jan 21 15:28:52 crc kubenswrapper[4834]: I0121 15:28:52.616512 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:52 crc kubenswrapper[4834]: I0121 15:28:52.617778 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:52 crc kubenswrapper[4834]: I0121 15:28:52.680808 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:53 crc kubenswrapper[4834]: I0121 15:28:53.201476 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:53 crc kubenswrapper[4834]: I0121 15:28:53.258058 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:55 crc kubenswrapper[4834]: I0121 15:28:55.166243 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gpjvw" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="registry-server" containerID="cri-o://4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0" gracePeriod=2 Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.165608 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.177395 4834 generic.go:334] "Generic (PLEG): container finished" podID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerID="4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0" exitCode=0 Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.177437 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerDied","Data":"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0"} Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.177465 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gpjvw" event={"ID":"d5f26110-70cc-4a1b-8de8-6357fffa01a2","Type":"ContainerDied","Data":"9d9b2a2d54229df873ed29cb007f1bd4bcbc86ff240f4bf6b61aa4104f3cd133"} Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.177481 4834 scope.go:117] "RemoveContainer" containerID="4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.177480 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gpjvw" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.204250 4834 scope.go:117] "RemoveContainer" containerID="2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.221598 4834 scope.go:117] "RemoveContainer" containerID="c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.226660 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content\") pod \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.226715 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities\") pod \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.226836 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9845s\" (UniqueName: \"kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s\") pod \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\" (UID: \"d5f26110-70cc-4a1b-8de8-6357fffa01a2\") " Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.229286 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities" (OuterVolumeSpecName: "utilities") pod "d5f26110-70cc-4a1b-8de8-6357fffa01a2" (UID: "d5f26110-70cc-4a1b-8de8-6357fffa01a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.232521 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s" (OuterVolumeSpecName: "kube-api-access-9845s") pod "d5f26110-70cc-4a1b-8de8-6357fffa01a2" (UID: "d5f26110-70cc-4a1b-8de8-6357fffa01a2"). InnerVolumeSpecName "kube-api-access-9845s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.267489 4834 scope.go:117] "RemoveContainer" containerID="4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0" Jan 21 15:28:56 crc kubenswrapper[4834]: E0121 15:28:56.268046 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0\": container with ID starting with 4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0 not found: ID does not exist" containerID="4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.268087 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0"} err="failed to get container status \"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0\": rpc error: code = NotFound desc = could not find container \"4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0\": container with ID starting with 4e3d136d2ea51ab66f622e334da4d638bbfd7ea71165e3fcfe6466ab6b2e94a0 not found: ID does not exist" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.268109 4834 scope.go:117] "RemoveContainer" containerID="2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f" Jan 21 15:28:56 crc kubenswrapper[4834]: E0121 15:28:56.268442 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f\": container with ID starting with 2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f not found: ID does not exist" containerID="2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.268494 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f"} err="failed to get container status \"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f\": rpc error: code = NotFound desc = could not find container \"2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f\": container with ID starting with 2a6efe437eb6034ea106cecc5acab7ccbc856e9993ac1d500a7e07cd3179a04f not found: ID does not exist" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.268525 4834 scope.go:117] "RemoveContainer" containerID="c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de" Jan 21 15:28:56 crc kubenswrapper[4834]: E0121 15:28:56.268749 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de\": container with ID starting with c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de not found: ID does not exist" containerID="c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.268777 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de"} err="failed to get container status \"c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de\": rpc error: code = NotFound desc = could not 
find container \"c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de\": container with ID starting with c068b9ba2aed5506a3653139e83dec152db24004e51dcca7688f3fde4a8e98de not found: ID does not exist" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.290801 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5f26110-70cc-4a1b-8de8-6357fffa01a2" (UID: "d5f26110-70cc-4a1b-8de8-6357fffa01a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.328149 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.328190 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f26110-70cc-4a1b-8de8-6357fffa01a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.328203 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9845s\" (UniqueName: \"kubernetes.io/projected/d5f26110-70cc-4a1b-8de8-6357fffa01a2-kube-api-access-9845s\") on node \"crc\" DevicePath \"\"" Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.498544 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:56 crc kubenswrapper[4834]: I0121 15:28:56.507390 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gpjvw"] Jan 21 15:28:58 crc kubenswrapper[4834]: I0121 15:28:58.333951 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" path="/var/lib/kubelet/pods/d5f26110-70cc-4a1b-8de8-6357fffa01a2/volumes" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.164354 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn"] Jan 21 15:30:00 crc kubenswrapper[4834]: E0121 15:30:00.165917 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.165965 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4834]: E0121 15:30:00.165983 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.165990 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="extract-content" Jan 21 15:30:00 crc kubenswrapper[4834]: E0121 15:30:00.166004 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.166012 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="extract-utilities" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.166215 4834 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d5f26110-70cc-4a1b-8de8-6357fffa01a2" containerName="registry-server" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.167087 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.170189 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.170243 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.175853 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn"] Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.214188 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.214247 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwlr\" (UniqueName: \"kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.214286 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.315758 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.315807 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwlr\" (UniqueName: \"kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.315828 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc 
kubenswrapper[4834]: I0121 15:30:00.317455 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.326205 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.338569 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwlr\" (UniqueName: \"kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr\") pod \"collect-profiles-29483490-8b9nn\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.490402 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:00 crc kubenswrapper[4834]: I0121 15:30:00.936445 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn"] Jan 21 15:30:01 crc kubenswrapper[4834]: I0121 15:30:01.647504 4834 generic.go:334] "Generic (PLEG): container finished" podID="cc347f98-a93c-4fd4-9aeb-1bcc58992509" containerID="3721e6df56f79c815569579c3843194163e62b497302c8c695d733c1a7382835" exitCode=0 Jan 21 15:30:01 crc kubenswrapper[4834]: I0121 15:30:01.647691 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" event={"ID":"cc347f98-a93c-4fd4-9aeb-1bcc58992509","Type":"ContainerDied","Data":"3721e6df56f79c815569579c3843194163e62b497302c8c695d733c1a7382835"} Jan 21 15:30:01 crc kubenswrapper[4834]: I0121 15:30:01.649136 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" event={"ID":"cc347f98-a93c-4fd4-9aeb-1bcc58992509","Type":"ContainerStarted","Data":"a2159df66f0775aec2ca9fec7fc609f4b2fdb768147cc5124466f4f1a76c16f8"} Jan 21 15:30:02 crc kubenswrapper[4834]: I0121 15:30:02.933426 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.055226 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume\") pod \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.055412 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxwlr\" (UniqueName: \"kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr\") pod \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.055459 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume\") pod \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\" (UID: \"cc347f98-a93c-4fd4-9aeb-1bcc58992509\") " Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.056239 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc347f98-a93c-4fd4-9aeb-1bcc58992509" (UID: "cc347f98-a93c-4fd4-9aeb-1bcc58992509"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.061227 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc347f98-a93c-4fd4-9aeb-1bcc58992509" (UID: "cc347f98-a93c-4fd4-9aeb-1bcc58992509"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.062285 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr" (OuterVolumeSpecName: "kube-api-access-hxwlr") pod "cc347f98-a93c-4fd4-9aeb-1bcc58992509" (UID: "cc347f98-a93c-4fd4-9aeb-1bcc58992509"). InnerVolumeSpecName "kube-api-access-hxwlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.157256 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc347f98-a93c-4fd4-9aeb-1bcc58992509-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.157294 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxwlr\" (UniqueName: \"kubernetes.io/projected/cc347f98-a93c-4fd4-9aeb-1bcc58992509-kube-api-access-hxwlr\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.157306 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc347f98-a93c-4fd4-9aeb-1bcc58992509-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.667156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" event={"ID":"cc347f98-a93c-4fd4-9aeb-1bcc58992509","Type":"ContainerDied","Data":"a2159df66f0775aec2ca9fec7fc609f4b2fdb768147cc5124466f4f1a76c16f8"} Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.667560 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2159df66f0775aec2ca9fec7fc609f4b2fdb768147cc5124466f4f1a76c16f8" Jan 21 15:30:03 crc kubenswrapper[4834]: I0121 15:30:03.667211 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn" Jan 21 15:30:04 crc kubenswrapper[4834]: I0121 15:30:04.001036 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx"] Jan 21 15:30:04 crc kubenswrapper[4834]: I0121 15:30:04.007589 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483445-t8llx"] Jan 21 15:30:04 crc kubenswrapper[4834]: I0121 15:30:04.339768 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdca5cec-a6bb-41bb-9270-2f6885e774db" path="/var/lib/kubelet/pods/cdca5cec-a6bb-41bb-9270-2f6885e774db/volumes" Jan 21 15:30:06 crc kubenswrapper[4834]: I0121 15:30:06.522531 4834 scope.go:117] "RemoveContainer" containerID="475734973abf007eb3f20554d5c66cba5477774afb370779cfe7ea0a432aab21" Jan 21 15:30:17 crc kubenswrapper[4834]: I0121 15:30:17.114130 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:30:17 crc kubenswrapper[4834]: I0121 15:30:17.115019 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:30:47 crc kubenswrapper[4834]: I0121 15:30:47.114132 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 15:30:47 crc kubenswrapper[4834]: I0121 15:30:47.114783 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:31:17 crc kubenswrapper[4834]: I0121 15:31:17.113546 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:31:17 crc kubenswrapper[4834]: I0121 15:31:17.114142 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:31:17 crc kubenswrapper[4834]: I0121 15:31:17.114188 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 15:31:17 crc kubenswrapper[4834]: I0121 15:31:17.114874 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:31:17 crc kubenswrapper[4834]: I0121 15:31:17.114963 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" gracePeriod=600 Jan 21 15:31:17 crc kubenswrapper[4834]: E0121 15:31:17.240870 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:31:18 crc kubenswrapper[4834]: I0121 15:31:18.226514 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" exitCode=0 Jan 21 15:31:18 crc kubenswrapper[4834]: I0121 15:31:18.226562 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3"} Jan 21 15:31:18 crc kubenswrapper[4834]: I0121 15:31:18.226897 4834 scope.go:117] "RemoveContainer" containerID="d30d547c3b4d5cb308b12f1e0b83e287a1154379737b78e8c6ae446c26fdd37c" Jan 21 15:31:18 crc kubenswrapper[4834]: I0121 15:31:18.227456 4834 scope.go:117] "RemoveContainer" 
containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:31:18 crc kubenswrapper[4834]: E0121 15:31:18.227683 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:31:32 crc kubenswrapper[4834]: I0121 15:31:32.325584 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:31:32 crc kubenswrapper[4834]: E0121 15:31:32.326335 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:31:43 crc kubenswrapper[4834]: I0121 15:31:43.324877 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:31:43 crc kubenswrapper[4834]: E0121 15:31:43.326061 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:31:55 crc kubenswrapper[4834]: I0121 15:31:55.324514 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:31:55 crc kubenswrapper[4834]: E0121 15:31:55.325387 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:32:09 crc kubenswrapper[4834]: I0121 15:32:09.325742 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:32:09 crc kubenswrapper[4834]: E0121 15:32:09.326656 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:32:20 crc kubenswrapper[4834]: I0121 15:32:20.324830 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:32:20 crc kubenswrapper[4834]: E0121 15:32:20.325712 4834 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:32:32 crc kubenswrapper[4834]: I0121 15:32:32.385590 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:32:32 crc kubenswrapper[4834]: E0121 15:32:32.386462 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:32:46 crc kubenswrapper[4834]: I0121 15:32:46.325140 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:32:46 crc kubenswrapper[4834]: E0121 15:32:46.325960 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:32:59 crc kubenswrapper[4834]: I0121 15:32:59.324758 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:32:59 crc kubenswrapper[4834]: E0121 15:32:59.325413 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:33:13 crc kubenswrapper[4834]: I0121 15:33:13.324576 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:33:13 crc kubenswrapper[4834]: E0121 15:33:13.325468 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:33:24 crc kubenswrapper[4834]: I0121 15:33:24.331181 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:33:24 crc kubenswrapper[4834]: E0121 15:33:24.333508 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:33:35 crc kubenswrapper[4834]: I0121 15:33:35.324174 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:33:35 crc kubenswrapper[4834]: E0121 15:33:35.324852 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:33:50 crc kubenswrapper[4834]: I0121 15:33:50.324502 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:33:50 crc kubenswrapper[4834]: E0121 15:33:50.326262 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:34:01 crc kubenswrapper[4834]: I0121 15:34:01.324289 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:34:01 crc kubenswrapper[4834]: E0121 15:34:01.325119 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:34:15 crc kubenswrapper[4834]: I0121 15:34:15.324987 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:34:15 crc kubenswrapper[4834]: E0121 15:34:15.325792 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:34:26 crc kubenswrapper[4834]: I0121 15:34:26.324710 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:34:26 crc kubenswrapper[4834]: E0121 15:34:26.325432 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:34:37 crc kubenswrapper[4834]: I0121 15:34:37.324769 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:34:37 crc kubenswrapper[4834]: E0121 15:34:37.326986 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:34:49 crc kubenswrapper[4834]: I0121 15:34:49.324804 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:34:49 crc kubenswrapper[4834]: E0121 15:34:49.325581 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:35:00 crc kubenswrapper[4834]: I0121 15:35:00.324653 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:35:00 crc kubenswrapper[4834]: E0121 15:35:00.325453 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:35:13 crc kubenswrapper[4834]: I0121 15:35:13.325016 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:35:13 crc kubenswrapper[4834]: E0121 15:35:13.325776 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:35:25 crc kubenswrapper[4834]: I0121 15:35:25.324314 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:35:25 crc kubenswrapper[4834]: E0121 15:35:25.325118 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:35:39 crc kubenswrapper[4834]: I0121 15:35:39.324724 4834 scope.go:117] "RemoveContainer" 
containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:35:39 crc kubenswrapper[4834]: E0121 15:35:39.325404 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:35:52 crc kubenswrapper[4834]: I0121 15:35:52.324809 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:35:52 crc kubenswrapper[4834]: E0121 15:35:52.326210 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:36:05 crc kubenswrapper[4834]: I0121 15:36:05.325806 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:36:05 crc kubenswrapper[4834]: E0121 15:36:05.326827 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:36:18 crc kubenswrapper[4834]: I0121 15:36:18.324867 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3" Jan 21 15:36:18 crc kubenswrapper[4834]: I0121 15:36:18.917852 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049"} Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.231629 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:36:50 crc kubenswrapper[4834]: E0121 15:36:50.232657 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc347f98-a93c-4fd4-9aeb-1bcc58992509" containerName="collect-profiles" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.232673 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc347f98-a93c-4fd4-9aeb-1bcc58992509" containerName="collect-profiles" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.232873 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc347f98-a93c-4fd4-9aeb-1bcc58992509" containerName="collect-profiles" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.234379 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.246448 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.302218 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ptr\" (UniqueName: \"kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.302287 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.302384 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.403643 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.403730 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ptr\" (UniqueName: \"kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.403765 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.404369 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.404543 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.495698 4834 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k4ptr\" (UniqueName: \"kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr\") pod \"redhat-marketplace-mnz99\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.585206 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:36:50 crc kubenswrapper[4834]: I0121 15:36:50.914995 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.161960 4834 generic.go:334] "Generic (PLEG): container finished" podID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerID="1656cc75ec5965329d3f46a5d583475f907df7cdb1af85177a4aa6121fc34223" exitCode=0 Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.162073 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerDied","Data":"1656cc75ec5965329d3f46a5d583475f907df7cdb1af85177a4aa6121fc34223"} Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.162130 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerStarted","Data":"4a508e7a21ebc3a062884a6d66bf837ff20f1b773d388217b8ca18da95920290"} Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.164307 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.235508 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-875t5"] Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.237312 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.243730 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-875t5"] Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.317131 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.317489 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.317627 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cdmr\" (UniqueName: \"kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.418777 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.418887 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cdmr\" (UniqueName: \"kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.419345 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.419523 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.419813 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.443006 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9cdmr\" (UniqueName: \"kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr\") pod \"redhat-operators-875t5\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") " pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.553221 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:36:51 crc kubenswrapper[4834]: I0121 15:36:51.994675 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-875t5"] Jan 21 15:36:52 crc kubenswrapper[4834]: W0121 15:36:52.099411 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ff7d812_e06e_4dbb_ae14_977e025e51e7.slice/crio-90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4 WatchSource:0}: Error finding container 90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4: Status 404 returned error can't find the container with id 90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4 Jan 21 15:36:52 crc kubenswrapper[4834]: I0121 15:36:52.170728 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerStarted","Data":"90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4"} Jan 21 15:36:53 crc kubenswrapper[4834]: I0121 15:36:53.185840 4834 generic.go:334] "Generic (PLEG): container finished" podID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerID="daafbb73002800c803e238a012a5d403fbea1d0af52b98cd24af949c620e6f8a" exitCode=0 Jan 21 15:36:53 crc kubenswrapper[4834]: I0121 15:36:53.185951 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerDied","Data":"daafbb73002800c803e238a012a5d403fbea1d0af52b98cd24af949c620e6f8a"} Jan 21 15:36:53 crc kubenswrapper[4834]: I0121 15:36:53.191505 4834 generic.go:334] "Generic (PLEG): container finished" podID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerID="4669b8f7a982ac6b2ae03a245e3c01de05595b874adb916ea99995e36672c7dd" exitCode=0 Jan 21 15:36:53 crc kubenswrapper[4834]: I0121 15:36:53.191562 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerDied","Data":"4669b8f7a982ac6b2ae03a245e3c01de05595b874adb916ea99995e36672c7dd"} Jan 21 15:36:54 crc kubenswrapper[4834]: I0121 15:36:54.204125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerStarted","Data":"04b807b961c6d74aa5eb2f88ef1086d8c625cb6828faa2f3e2ebf08ffdfca0c5"} Jan 21 15:36:54 crc kubenswrapper[4834]: I0121 15:36:54.224617 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mnz99" podStartSLOduration=1.7915941869999998 podStartE2EDuration="4.224591933s" podCreationTimestamp="2026-01-21 15:36:50 +0000 UTC" firstStartedPulling="2026-01-21 15:36:51.163902354 +0000 UTC m=+3957.138251399" lastFinishedPulling="2026-01-21 15:36:53.5969001 +0000 UTC m=+3959.571249145" observedRunningTime="2026-01-21 15:36:54.21840523 +0000 UTC m=+3960.192754285" watchObservedRunningTime="2026-01-21 
Jan 21 15:36:55 crc kubenswrapper[4834]: I0121 15:36:55.214986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerStarted","Data":"97f0b61e7be9253474347c8691325ac737380f2abdd2dda0cec8c43b5c897d1b"}
Jan 21 15:36:56 crc kubenswrapper[4834]: I0121 15:36:56.224617 4834 generic.go:334] "Generic (PLEG): container finished" podID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerID="97f0b61e7be9253474347c8691325ac737380f2abdd2dda0cec8c43b5c897d1b" exitCode=0
Jan 21 15:36:56 crc kubenswrapper[4834]: I0121 15:36:56.224671 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerDied","Data":"97f0b61e7be9253474347c8691325ac737380f2abdd2dda0cec8c43b5c897d1b"}
Jan 21 15:36:57 crc kubenswrapper[4834]: I0121 15:36:57.233433 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerStarted","Data":"52e1337be41101326a81e3692c2acd28cc960fb8a0f53096648c50932c7ef958"}
Jan 21 15:36:57 crc kubenswrapper[4834]: I0121 15:36:57.260850 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-875t5" podStartSLOduration=2.723186118 podStartE2EDuration="6.26081966s" podCreationTimestamp="2026-01-21 15:36:51 +0000 UTC" firstStartedPulling="2026-01-21 15:36:53.196888644 +0000 UTC m=+3959.171237689" lastFinishedPulling="2026-01-21 15:36:56.734522186 +0000 UTC m=+3962.708871231" observedRunningTime="2026-01-21 15:36:57.252302404 +0000 UTC m=+3963.226651449" watchObservedRunningTime="2026-01-21 15:36:57.26081966 +0000 UTC m=+3963.235168705"
Jan 21 15:37:00 crc kubenswrapper[4834]: I0121 15:37:00.586179 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mnz99"
Jan 21 15:37:00 crc kubenswrapper[4834]: I0121 15:37:00.586585 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mnz99"
Jan 21 15:37:00 crc kubenswrapper[4834]: I0121 15:37:00.651474 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mnz99"
Jan 21 15:37:01 crc kubenswrapper[4834]: I0121 15:37:01.297586 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mnz99"
Jan 21 15:37:01 crc kubenswrapper[4834]: I0121 15:37:01.554349 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-875t5"
Jan 21 15:37:01 crc kubenswrapper[4834]: I0121 15:37:01.554856 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-875t5"
Jan 21 15:37:02 crc kubenswrapper[4834]: I0121 15:37:02.829778 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-875t5" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="registry-server" probeResult="failure" output=<
Jan 21 15:37:02 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s
Jan 21 15:37:02 crc kubenswrapper[4834]: >
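The startup-probe failure at 15:37:02 shows what the probe is actually doing: it tries the registry-server gRPC endpoint on :50051 and gives up after 1s. The wording of the output matches the grpc_health_probe tool commonly used as an exec probe in catalog pods, though that is an inference from the message, not something the log states. A rough Go equivalent written against the standard gRPC health v1 API (address and timeout taken from the log; everything else is an assumption):

```go
// Sketch of a grpc_health_probe-style check against the catalog pod's
// registry-server, using google.golang.org/grpc's health v1 API.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// An empty Service name asks about the server as a whole.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err) // analogous to the 1s timeout in the log
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog is ready
}
```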
source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:37:05 crc kubenswrapper[4834]: I0121 15:37:05.220082 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mnz99" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="registry-server" containerID="cri-o://04b807b961c6d74aa5eb2f88ef1086d8c625cb6828faa2f3e2ebf08ffdfca0c5" gracePeriod=2 Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.306122 4834 generic.go:334] "Generic (PLEG): container finished" podID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerID="04b807b961c6d74aa5eb2f88ef1086d8c625cb6828faa2f3e2ebf08ffdfca0c5" exitCode=0 Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.306222 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerDied","Data":"04b807b961c6d74aa5eb2f88ef1086d8c625cb6828faa2f3e2ebf08ffdfca0c5"} Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.306552 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mnz99" event={"ID":"109b69ce-9c69-4249-adc3-7eef1f5edaa1","Type":"ContainerDied","Data":"4a508e7a21ebc3a062884a6d66bf837ff20f1b773d388217b8ca18da95920290"} Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.306574 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a508e7a21ebc3a062884a6d66bf837ff20f1b773d388217b8ca18da95920290" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.341529 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.363090 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4ptr\" (UniqueName: \"kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr\") pod \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.363164 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities\") pod \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.363231 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content\") pod \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\" (UID: \"109b69ce-9c69-4249-adc3-7eef1f5edaa1\") " Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.366389 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities" (OuterVolumeSpecName: "utilities") pod "109b69ce-9c69-4249-adc3-7eef1f5edaa1" (UID: "109b69ce-9c69-4249-adc3-7eef1f5edaa1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.370430 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr" (OuterVolumeSpecName: "kube-api-access-k4ptr") pod "109b69ce-9c69-4249-adc3-7eef1f5edaa1" (UID: "109b69ce-9c69-4249-adc3-7eef1f5edaa1"). InnerVolumeSpecName "kube-api-access-k4ptr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.388835 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "109b69ce-9c69-4249-adc3-7eef1f5edaa1" (UID: "109b69ce-9c69-4249-adc3-7eef1f5edaa1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.466093 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4ptr\" (UniqueName: \"kubernetes.io/projected/109b69ce-9c69-4249-adc3-7eef1f5edaa1-kube-api-access-k4ptr\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.466140 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:08 crc kubenswrapper[4834]: I0121 15:37:08.466178 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/109b69ce-9c69-4249-adc3-7eef1f5edaa1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:37:09 crc kubenswrapper[4834]: I0121 15:37:09.314583 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mnz99" Jan 21 15:37:09 crc kubenswrapper[4834]: I0121 15:37:09.350093 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:37:09 crc kubenswrapper[4834]: I0121 15:37:09.355437 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mnz99"] Jan 21 15:37:10 crc kubenswrapper[4834]: I0121 15:37:10.333315 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" path="/var/lib/kubelet/pods/109b69ce-9c69-4249-adc3-7eef1f5edaa1/volumes" Jan 21 15:37:11 crc kubenswrapper[4834]: I0121 15:37:11.596351 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:37:11 crc kubenswrapper[4834]: I0121 15:37:11.649771 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-875t5" Jan 21 15:37:12 crc kubenswrapper[4834]: I0121 15:37:12.221562 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-875t5"] Jan 21 15:37:13 crc kubenswrapper[4834]: I0121 15:37:13.348535 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-875t5" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="registry-server" containerID="cri-o://52e1337be41101326a81e3692c2acd28cc960fb8a0f53096648c50932c7ef958" gracePeriod=2 Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.366873 4834 generic.go:334] "Generic (PLEG): container finished" podID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerID="52e1337be41101326a81e3692c2acd28cc960fb8a0f53096648c50932c7ef958" exitCode=0 Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.367107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerDied","Data":"52e1337be41101326a81e3692c2acd28cc960fb8a0f53096648c50932c7ef958"} Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.367236 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-875t5" event={"ID":"1ff7d812-e06e-4dbb-ae14-977e025e51e7","Type":"ContainerDied","Data":"90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4"} Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.367267 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d63870b429f2634d673ce6a8e0e068bb963495d2d1b925c785a931b27d33d4" Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.380754 4834 util.go:48] "No ready sandbox for pod can be found. 
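Both catalog pods above are torn down the same way: a SyncLoop DELETE from the API, then the kubelet kills registry-server with gracePeriod=2. For illustration, a client-go sketch of an API-side pod delete with an explicit grace period; namespace and pod name are taken from the log, the 2s value mirrors gracePeriod=2, and issuing the delete from a client like this is an assumption (the real delete came from the marketplace controller, and the effective grace period may equally come from the pod spec):

```go
// Illustrative only: delete a pod with an explicit termination grace period.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in-cluster config would work the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(2) // same value the kubelet reports above
	err = client.CoreV1().Pods("openshift-marketplace").Delete(
		context.Background(), "redhat-marketplace-mnz99",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete:", err)
}
```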
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.380754 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-875t5"
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.452827 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cdmr\" (UniqueName: \"kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr\") pod \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") "
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.452918 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content\") pod \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") "
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.453024 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities\") pod \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\" (UID: \"1ff7d812-e06e-4dbb-ae14-977e025e51e7\") "
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.453847 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities" (OuterVolumeSpecName: "utilities") pod "1ff7d812-e06e-4dbb-ae14-977e025e51e7" (UID: "1ff7d812-e06e-4dbb-ae14-977e025e51e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.459322 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr" (OuterVolumeSpecName: "kube-api-access-9cdmr") pod "1ff7d812-e06e-4dbb-ae14-977e025e51e7" (UID: "1ff7d812-e06e-4dbb-ae14-977e025e51e7"). InnerVolumeSpecName "kube-api-access-9cdmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.555211 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.555248 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cdmr\" (UniqueName: \"kubernetes.io/projected/1ff7d812-e06e-4dbb-ae14-977e025e51e7-kube-api-access-9cdmr\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.587095 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff7d812-e06e-4dbb-ae14-977e025e51e7" (UID: "1ff7d812-e06e-4dbb-ae14-977e025e51e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:37:14 crc kubenswrapper[4834]: I0121 15:37:14.655912 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff7d812-e06e-4dbb-ae14-977e025e51e7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:37:15 crc kubenswrapper[4834]: I0121 15:37:15.373745 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-875t5"
Jan 21 15:37:15 crc kubenswrapper[4834]: I0121 15:37:15.405500 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-875t5"]
Jan 21 15:37:15 crc kubenswrapper[4834]: I0121 15:37:15.412646 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-875t5"]
Jan 21 15:37:16 crc kubenswrapper[4834]: I0121 15:37:16.332681 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" path="/var/lib/kubelet/pods/1ff7d812-e06e-4dbb-ae14-977e025e51e7/volumes"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.373430 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.378961 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379052 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.379078 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="extract-utilities"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379095 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="extract-utilities"
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.379121 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="extract-utilities"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379135 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="extract-utilities"
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.379158 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="extract-content"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379171 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="extract-content"
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.379200 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="extract-content"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379211 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="extract-content"
Jan 21 15:38:38 crc kubenswrapper[4834]: E0121 15:38:38.379233 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379245 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379509 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff7d812-e06e-4dbb-ae14-977e025e51e7" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.379556 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="109b69ce-9c69-4249-adc3-7eef1f5edaa1" containerName="registry-server"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.424437 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.438703 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.489882 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.489967 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztvm\" (UniqueName: \"kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.489994 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.591717 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.591788 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztvm\" (UniqueName: \"kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.591830 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.592415 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.592462 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.613975 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztvm\" (UniqueName: \"kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm\") pod \"certified-operators-5mr9x\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") " pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:38 crc kubenswrapper[4834]: I0121 15:38:38.756667 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:39 crc kubenswrapper[4834]: I0121 15:38:39.249305 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:40 crc kubenswrapper[4834]: I0121 15:38:40.056675 4834 generic.go:334] "Generic (PLEG): container finished" podID="05f863d3-6450-404b-9aec-6e340b888125" containerID="d244a87f187b7b1cb492e978cab03cc6c4f8d3f68ec457805544406ae377b226" exitCode=0
Jan 21 15:38:40 crc kubenswrapper[4834]: I0121 15:38:40.056739 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerDied","Data":"d244a87f187b7b1cb492e978cab03cc6c4f8d3f68ec457805544406ae377b226"}
Jan 21 15:38:40 crc kubenswrapper[4834]: I0121 15:38:40.056779 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerStarted","Data":"043d5b8920e008c4460758643144848a332a81f3631a56656d8765b7074f3fdf"}
Jan 21 15:38:41 crc kubenswrapper[4834]: I0121 15:38:41.070011 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerStarted","Data":"a78d50b631518bd4d0ee04ec19a462551630fd5ee2001f0807d09013a9fe1a45"}
Jan 21 15:38:42 crc kubenswrapper[4834]: I0121 15:38:42.078630 4834 generic.go:334] "Generic (PLEG): container finished" podID="05f863d3-6450-404b-9aec-6e340b888125" containerID="a78d50b631518bd4d0ee04ec19a462551630fd5ee2001f0807d09013a9fe1a45" exitCode=0
Jan 21 15:38:42 crc kubenswrapper[4834]: I0121 15:38:42.078709 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerDied","Data":"a78d50b631518bd4d0ee04ec19a462551630fd5ee2001f0807d09013a9fe1a45"}
Jan 21 15:38:43 crc kubenswrapper[4834]: I0121 15:38:43.088522 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerStarted","Data":"09d4959da8b2bddd7485bffd7709d0c365ca6c31f4d1febfb65ba44838313cd6"}
Jan 21 15:38:43 crc kubenswrapper[4834]: I0121 15:38:43.109159 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5mr9x" podStartSLOduration=2.486469882 podStartE2EDuration="5.109135301s" podCreationTimestamp="2026-01-21 15:38:38 +0000 UTC" firstStartedPulling="2026-01-21 15:38:40.05837216 +0000 UTC m=+4066.032721205" lastFinishedPulling="2026-01-21 15:38:42.681037579 +0000 UTC m=+4068.655386624" observedRunningTime="2026-01-21 15:38:43.108878933 +0000 UTC m=+4069.083227998" watchObservedRunningTime="2026-01-21 15:38:43.109135301 +0000 UTC m=+4069.083484346"
Jan 21 15:38:47 crc kubenswrapper[4834]: I0121 15:38:47.113777 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:38:47 crc kubenswrapper[4834]: I0121 15:38:47.115368 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:38:48 crc kubenswrapper[4834]: I0121 15:38:48.757647 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:48 crc kubenswrapper[4834]: I0121 15:38:48.758120 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:48 crc kubenswrapper[4834]: I0121 15:38:48.804987 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:49 crc kubenswrapper[4834]: I0121 15:38:49.170581 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:49 crc kubenswrapper[4834]: I0121 15:38:49.222988 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:51 crc kubenswrapper[4834]: I0121 15:38:51.149567 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5mr9x" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="registry-server" containerID="cri-o://09d4959da8b2bddd7485bffd7709d0c365ca6c31f4d1febfb65ba44838313cd6" gracePeriod=2
Jan 21 15:38:53 crc kubenswrapper[4834]: I0121 15:38:53.165064 4834 generic.go:334] "Generic (PLEG): container finished" podID="05f863d3-6450-404b-9aec-6e340b888125" containerID="09d4959da8b2bddd7485bffd7709d0c365ca6c31f4d1febfb65ba44838313cd6" exitCode=0
Jan 21 15:38:53 crc kubenswrapper[4834]: I0121 15:38:53.165120 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerDied","Data":"09d4959da8b2bddd7485bffd7709d0c365ca6c31f4d1febfb65ba44838313cd6"}
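The machine-config-daemon liveness failures that start at 15:38:47 are a plain HTTP GET against 127.0.0.1:8798/health, and they fail at the TCP level ("connection refused") rather than with a bad status code. A minimal Go reproduction of the same kind of check; the URL and port come from the probe output, while the 1s client timeout is an assumed stand-in for the probe's timeoutSeconds:

```go
// Reproduce the HTTP liveness check the kubelet is running above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// Matches the log: "dial tcp 127.0.0.1:8798: connect: connection refused"
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // kubelet treats 2xx-3xx as healthy
}
```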
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.488689 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.595837 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztvm\" (UniqueName: \"kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm\") pod \"05f863d3-6450-404b-9aec-6e340b888125\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") "
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.595984 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities\") pod \"05f863d3-6450-404b-9aec-6e340b888125\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") "
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.596083 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content\") pod \"05f863d3-6450-404b-9aec-6e340b888125\" (UID: \"05f863d3-6450-404b-9aec-6e340b888125\") "
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.597121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities" (OuterVolumeSpecName: "utilities") pod "05f863d3-6450-404b-9aec-6e340b888125" (UID: "05f863d3-6450-404b-9aec-6e340b888125"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.602051 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm" (OuterVolumeSpecName: "kube-api-access-9ztvm") pod "05f863d3-6450-404b-9aec-6e340b888125" (UID: "05f863d3-6450-404b-9aec-6e340b888125"). InnerVolumeSpecName "kube-api-access-9ztvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.658412 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05f863d3-6450-404b-9aec-6e340b888125" (UID: "05f863d3-6450-404b-9aec-6e340b888125"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.697480 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.697526 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05f863d3-6450-404b-9aec-6e340b888125-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:54 crc kubenswrapper[4834]: I0121 15:38:54.697540 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztvm\" (UniqueName: \"kubernetes.io/projected/05f863d3-6450-404b-9aec-6e340b888125-kube-api-access-9ztvm\") on node \"crc\" DevicePath \"\""
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.181157 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mr9x" event={"ID":"05f863d3-6450-404b-9aec-6e340b888125","Type":"ContainerDied","Data":"043d5b8920e008c4460758643144848a332a81f3631a56656d8765b7074f3fdf"}
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.181212 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mr9x"
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.181227 4834 scope.go:117] "RemoveContainer" containerID="09d4959da8b2bddd7485bffd7709d0c365ca6c31f4d1febfb65ba44838313cd6"
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.201377 4834 scope.go:117] "RemoveContainer" containerID="a78d50b631518bd4d0ee04ec19a462551630fd5ee2001f0807d09013a9fe1a45"
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.221837 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.229825 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5mr9x"]
Jan 21 15:38:55 crc kubenswrapper[4834]: I0121 15:38:55.230369 4834 scope.go:117] "RemoveContainer" containerID="d244a87f187b7b1cb492e978cab03cc6c4f8d3f68ec457805544406ae377b226"
Jan 21 15:38:56 crc kubenswrapper[4834]: I0121 15:38:56.339078 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f863d3-6450-404b-9aec-6e340b888125" path="/var/lib/kubelet/pods/05f863d3-6450-404b-9aec-6e340b888125/volumes"
Jan 21 15:39:17 crc kubenswrapper[4834]: I0121 15:39:17.113508 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:39:17 crc kubenswrapper[4834]: I0121 15:39:17.114132 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.114223 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.114778 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.114829 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.115452 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.115499 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049" gracePeriod=600
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.559585 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049" exitCode=0
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.559675 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049"}
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.560312 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e"}
Jan 21 15:39:47 crc kubenswrapper[4834]: I0121 15:39:47.560436 4834 scope.go:117] "RemoveContainer" containerID="ce1a9a42d19876e3c8b7a197fe2d0b489bd3faf16f7bc4f6ce9bfebc8532b8c3"
Jan 21 15:41:47 crc kubenswrapper[4834]: I0121 15:41:47.114878 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:41:47 crc kubenswrapper[4834]: I0121 15:41:47.116165 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:42:17 crc kubenswrapper[4834]: I0121 15:42:17.114173 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:42:17 crc kubenswrapper[4834]: I0121 15:42:17.114986 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.113572 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.114240 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.114287 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.114821 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.114882 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" gracePeriod=600
Jan 21 15:42:47 crc kubenswrapper[4834]: E0121 15:42:47.783504 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.988899 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" exitCode=0
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.988981 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e"}
Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.989078 4834 scope.go:117] "RemoveContainer" containerID="32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049"
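From 15:42:47 onward every restart attempt is rejected with CrashLoopBackOff and a back-off of 5m0s: the kubelet retries a crashing container with an exponentially growing delay, and this container has already reached the cap. A sketch of that schedule; the 10s base and doubling factor are the kubelet's long-standing defaults and are assumptions here, while the 5m cap is exactly what the log shows:

```go
// Print the exponential restart back-off implied by "back-off 5m0s".
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the cap visible in the log
	delay := 10 * time.Second        // assumed kubelet default base
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // pinned here: CrashLoopBackOff with back-off 5m0s
		}
	}
}
```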
containerID="32c0fc98c6f4dfe55524b7bb0795b15e9eb8f0524622033e4912a5970bb32049" Jan 21 15:42:47 crc kubenswrapper[4834]: I0121 15:42:47.989806 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:42:47 crc kubenswrapper[4834]: E0121 15:42:47.990171 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:00 crc kubenswrapper[4834]: I0121 15:43:00.325231 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:00 crc kubenswrapper[4834]: E0121 15:43:00.326235 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.768412 4834 scope.go:117] "RemoveContainer" containerID="1656cc75ec5965329d3f46a5d583475f907df7cdb1af85177a4aa6121fc34223" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.807920 4834 scope.go:117] "RemoveContainer" containerID="97f0b61e7be9253474347c8691325ac737380f2abdd2dda0cec8c43b5c897d1b" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.848431 4834 scope.go:117] "RemoveContainer" containerID="daafbb73002800c803e238a012a5d403fbea1d0af52b98cd24af949c620e6f8a" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.883191 4834 scope.go:117] "RemoveContainer" containerID="4669b8f7a982ac6b2ae03a245e3c01de05595b874adb916ea99995e36672c7dd" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.911658 4834 scope.go:117] "RemoveContainer" containerID="52e1337be41101326a81e3692c2acd28cc960fb8a0f53096648c50932c7ef958" Jan 21 15:43:06 crc kubenswrapper[4834]: I0121 15:43:06.931659 4834 scope.go:117] "RemoveContainer" containerID="04b807b961c6d74aa5eb2f88ef1086d8c625cb6828faa2f3e2ebf08ffdfca0c5" Jan 21 15:43:11 crc kubenswrapper[4834]: I0121 15:43:11.324250 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:11 crc kubenswrapper[4834]: E0121 15:43:11.324744 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:23 crc kubenswrapper[4834]: I0121 15:43:23.324884 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:23 crc kubenswrapper[4834]: E0121 15:43:23.325835 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:35 crc kubenswrapper[4834]: I0121 15:43:35.325815 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:35 crc kubenswrapper[4834]: E0121 15:43:35.326572 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:48 crc kubenswrapper[4834]: I0121 15:43:48.324730 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:48 crc kubenswrapper[4834]: E0121 15:43:48.325563 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:43:59 crc kubenswrapper[4834]: I0121 15:43:59.325110 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:43:59 crc kubenswrapper[4834]: E0121 15:43:59.326163 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:44:11 crc kubenswrapper[4834]: I0121 15:44:11.324669 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:44:11 crc kubenswrapper[4834]: E0121 15:44:11.325643 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:44:25 crc kubenswrapper[4834]: I0121 15:44:25.325281 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:44:25 crc kubenswrapper[4834]: E0121 15:44:25.326093 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:44:39 crc kubenswrapper[4834]: I0121 15:44:39.324886 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:44:39 crc kubenswrapper[4834]: E0121 15:44:39.325659 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:44:50 crc kubenswrapper[4834]: I0121 15:44:50.324325 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:44:50 crc kubenswrapper[4834]: E0121 15:44:50.325302 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.180416 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg"] Jan 21 15:45:00 crc kubenswrapper[4834]: E0121 15:45:00.184170 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="extract-utilities" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.184331 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="extract-utilities" Jan 21 15:45:00 crc kubenswrapper[4834]: E0121 15:45:00.184459 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.184562 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4834]: E0121 15:45:00.184678 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.184759 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="extract-content" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.185245 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f863d3-6450-404b-9aec-6e340b888125" containerName="registry-server" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.188992 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.193070 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.193284 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.192916 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg"] Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.326683 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldbt\" (UniqueName: \"kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.326745 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.326815 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.428150 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.428261 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldbt\" (UniqueName: \"kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.428310 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.430712 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume\") pod 
\"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.447387 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.451682 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldbt\" (UniqueName: \"kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt\") pod \"collect-profiles-29483505-jb4pg\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.512045 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:00 crc kubenswrapper[4834]: I0121 15:45:00.799022 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg"] Jan 21 15:45:01 crc kubenswrapper[4834]: I0121 15:45:01.044663 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" event={"ID":"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b","Type":"ContainerStarted","Data":"540cd33ebe3b7a899b0e327a4dd1c138bd3b0dc9ec028b8e66760d6c2fdb4fe7"} Jan 21 15:45:01 crc kubenswrapper[4834]: I0121 15:45:01.325496 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:45:01 crc kubenswrapper[4834]: E0121 15:45:01.325747 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:45:02 crc kubenswrapper[4834]: I0121 15:45:02.055563 4834 generic.go:334] "Generic (PLEG): container finished" podID="55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" containerID="f74f807651cbc5cc8be3d9b845d6a3b93b18bf4be5052b8d8a6ba12437a1a606" exitCode=0 Jan 21 15:45:02 crc kubenswrapper[4834]: I0121 15:45:02.055793 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" event={"ID":"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b","Type":"ContainerDied","Data":"f74f807651cbc5cc8be3d9b845d6a3b93b18bf4be5052b8d8a6ba12437a1a606"} Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.369295 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.491154 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldbt\" (UniqueName: \"kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt\") pod \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.491371 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume\") pod \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.491515 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume\") pod \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\" (UID: \"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b\") " Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.492676 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume" (OuterVolumeSpecName: "config-volume") pod "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" (UID: "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.500046 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt" (OuterVolumeSpecName: "kube-api-access-hldbt") pod "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" (UID: "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b"). InnerVolumeSpecName "kube-api-access-hldbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.500813 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" (UID: "55e1fdf4-1cdc-40f8-9395-1e2416d06f5b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.595312 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.595379 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldbt\" (UniqueName: \"kubernetes.io/projected/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-kube-api-access-hldbt\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:03 crc kubenswrapper[4834]: I0121 15:45:03.595404 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:45:04 crc kubenswrapper[4834]: I0121 15:45:04.073178 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" event={"ID":"55e1fdf4-1cdc-40f8-9395-1e2416d06f5b","Type":"ContainerDied","Data":"540cd33ebe3b7a899b0e327a4dd1c138bd3b0dc9ec028b8e66760d6c2fdb4fe7"} Jan 21 15:45:04 crc kubenswrapper[4834]: I0121 15:45:04.073229 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg" Jan 21 15:45:04 crc kubenswrapper[4834]: I0121 15:45:04.073247 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540cd33ebe3b7a899b0e327a4dd1c138bd3b0dc9ec028b8e66760d6c2fdb4fe7" Jan 21 15:45:04 crc kubenswrapper[4834]: I0121 15:45:04.482864 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx"] Jan 21 15:45:04 crc kubenswrapper[4834]: I0121 15:45:04.490327 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483460-mwdvx"] Jan 21 15:45:06 crc kubenswrapper[4834]: I0121 15:45:06.337682 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d059b54f-a3da-493d-84fb-9dd98acbe092" path="/var/lib/kubelet/pods/d059b54f-a3da-493d-84fb-9dd98acbe092/volumes" Jan 21 15:45:06 crc kubenswrapper[4834]: I0121 15:45:06.987734 4834 scope.go:117] "RemoveContainer" containerID="6f6da3293e2874a884f7df825aecd3f3bf71a0d820a28db01169c516b2077b94" Jan 21 15:45:15 crc kubenswrapper[4834]: I0121 15:45:15.324724 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:45:15 crc kubenswrapper[4834]: E0121 15:45:15.325438 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:45:27 crc kubenswrapper[4834]: I0121 15:45:27.325414 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:45:27 crc kubenswrapper[4834]: E0121 15:45:27.326325 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:45:38 crc kubenswrapper[4834]: I0121 15:45:38.325184 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:45:38 crc kubenswrapper[4834]: E0121 15:45:38.326848 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:45:51 crc kubenswrapper[4834]: I0121 15:45:51.324859 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:45:51 crc kubenswrapper[4834]: E0121 15:45:51.325669 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:46:02 crc kubenswrapper[4834]: I0121 15:46:02.325655 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:46:02 crc kubenswrapper[4834]: E0121 15:46:02.330414 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:46:13 crc kubenswrapper[4834]: I0121 15:46:13.326615 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:46:13 crc kubenswrapper[4834]: E0121 15:46:13.330136 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:46:25 crc kubenswrapper[4834]: I0121 15:46:25.325430 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:46:25 crc kubenswrapper[4834]: E0121 15:46:25.326676 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:46:39 crc kubenswrapper[4834]: I0121 15:46:39.324869 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:46:39 crc kubenswrapper[4834]: E0121 15:46:39.325792 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:46:52 crc kubenswrapper[4834]: I0121 15:46:52.324768 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:46:52 crc kubenswrapper[4834]: E0121 15:46:52.325561 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:47:06 crc kubenswrapper[4834]: I0121 15:47:06.324679 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:47:06 crc kubenswrapper[4834]: E0121 15:47:06.325490 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.820226 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-nfdv4"] Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.825253 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-nfdv4"] Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.947838 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-fskd2"] Jan 21 15:47:11 crc kubenswrapper[4834]: E0121 15:47:11.948278 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" containerName="collect-profiles" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.948303 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" containerName="collect-profiles" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.948569 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" containerName="collect-profiles" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.949394 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.952350 4834 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j5dvb" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.952851 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.953305 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.954340 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:47:11 crc kubenswrapper[4834]: I0121 15:47:11.957379 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fskd2"] Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.060034 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.060143 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84lq\" (UniqueName: \"kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.060173 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.161792 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.161873 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84lq\" (UniqueName: \"kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.161900 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.162410 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " 
pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.163224 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.186399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84lq\" (UniqueName: \"kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq\") pod \"crc-storage-crc-fskd2\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.282437 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.336827 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c59b7c-e36a-44b3-a34a-16939ae1ccb9" path="/var/lib/kubelet/pods/f7c59b7c-e36a-44b3-a34a-16939ae1ccb9/volumes" Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.745521 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fskd2"] Jan 21 15:47:12 crc kubenswrapper[4834]: I0121 15:47:12.755855 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:47:13 crc kubenswrapper[4834]: I0121 15:47:13.109395 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fskd2" event={"ID":"82c0c914-0516-4994-b148-4c8e965ebe91","Type":"ContainerStarted","Data":"dd4cd3ce5e2e9c92c5b5f3a7bcd6f74f3648b5e3c6dceab0987d2c2d535566f6"} Jan 21 15:47:14 crc kubenswrapper[4834]: I0121 15:47:14.118259 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fskd2" event={"ID":"82c0c914-0516-4994-b148-4c8e965ebe91","Type":"ContainerStarted","Data":"98b9cd38bbbcf3404d46767ddddd15aae4ef1a6dae3833317899242a09b1e2aa"} Jan 21 15:47:15 crc kubenswrapper[4834]: I0121 15:47:15.125991 4834 generic.go:334] "Generic (PLEG): container finished" podID="82c0c914-0516-4994-b148-4c8e965ebe91" containerID="98b9cd38bbbcf3404d46767ddddd15aae4ef1a6dae3833317899242a09b1e2aa" exitCode=0 Jan 21 15:47:15 crc kubenswrapper[4834]: I0121 15:47:15.126049 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fskd2" event={"ID":"82c0c914-0516-4994-b148-4c8e965ebe91","Type":"ContainerDied","Data":"98b9cd38bbbcf3404d46767ddddd15aae4ef1a6dae3833317899242a09b1e2aa"} Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.460976 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.548267 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt\") pod \"82c0c914-0516-4994-b148-4c8e965ebe91\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.548574 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "82c0c914-0516-4994-b148-4c8e965ebe91" (UID: "82c0c914-0516-4994-b148-4c8e965ebe91"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.548900 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage\") pod \"82c0c914-0516-4994-b148-4c8e965ebe91\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.548997 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h84lq\" (UniqueName: \"kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq\") pod \"82c0c914-0516-4994-b148-4c8e965ebe91\" (UID: \"82c0c914-0516-4994-b148-4c8e965ebe91\") " Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.549776 4834 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/82c0c914-0516-4994-b148-4c8e965ebe91-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.554798 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq" (OuterVolumeSpecName: "kube-api-access-h84lq") pod "82c0c914-0516-4994-b148-4c8e965ebe91" (UID: "82c0c914-0516-4994-b148-4c8e965ebe91"). InnerVolumeSpecName "kube-api-access-h84lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.566466 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "82c0c914-0516-4994-b148-4c8e965ebe91" (UID: "82c0c914-0516-4994-b148-4c8e965ebe91"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.650999 4834 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/82c0c914-0516-4994-b148-4c8e965ebe91-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:16 crc kubenswrapper[4834]: I0121 15:47:16.651334 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h84lq\" (UniqueName: \"kubernetes.io/projected/82c0c914-0516-4994-b148-4c8e965ebe91-kube-api-access-h84lq\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:17 crc kubenswrapper[4834]: I0121 15:47:17.143240 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fskd2" event={"ID":"82c0c914-0516-4994-b148-4c8e965ebe91","Type":"ContainerDied","Data":"dd4cd3ce5e2e9c92c5b5f3a7bcd6f74f3648b5e3c6dceab0987d2c2d535566f6"} Jan 21 15:47:17 crc kubenswrapper[4834]: I0121 15:47:17.143285 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4cd3ce5e2e9c92c5b5f3a7bcd6f74f3648b5e3c6dceab0987d2c2d535566f6" Jan 21 15:47:17 crc kubenswrapper[4834]: I0121 15:47:17.143354 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fskd2" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.265353 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-fskd2"] Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.269871 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-fskd2"] Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.325263 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:47:18 crc kubenswrapper[4834]: E0121 15:47:18.325503 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.334630 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82c0c914-0516-4994-b148-4c8e965ebe91" path="/var/lib/kubelet/pods/82c0c914-0516-4994-b148-4c8e965ebe91/volumes" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.411738 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jxl4m"] Jan 21 15:47:18 crc kubenswrapper[4834]: E0121 15:47:18.412148 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c0c914-0516-4994-b148-4c8e965ebe91" containerName="storage" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.412172 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c0c914-0516-4994-b148-4c8e965ebe91" containerName="storage" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.412363 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c0c914-0516-4994-b148-4c8e965ebe91" containerName="storage" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.412869 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.415792 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.416053 4834 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j5dvb" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.416091 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.416374 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.430840 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jxl4m"] Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.577160 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56gpv\" (UniqueName: \"kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.578043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.578248 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.679466 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56gpv\" (UniqueName: \"kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.679594 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.679671 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.680130 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " 
pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.680981 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.701735 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56gpv\" (UniqueName: \"kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv\") pod \"crc-storage-crc-jxl4m\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:18 crc kubenswrapper[4834]: I0121 15:47:18.783127 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:19 crc kubenswrapper[4834]: I0121 15:47:19.202297 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jxl4m"] Jan 21 15:47:20 crc kubenswrapper[4834]: I0121 15:47:20.165801 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jxl4m" event={"ID":"8ce5c74f-8cde-439d-b88e-540d56be1048","Type":"ContainerStarted","Data":"b9f01c50e1c04a7ff91d54787826a4bd03abee6dd85c1276f0095fe59cfa3155"} Jan 21 15:47:21 crc kubenswrapper[4834]: I0121 15:47:21.176224 4834 generic.go:334] "Generic (PLEG): container finished" podID="8ce5c74f-8cde-439d-b88e-540d56be1048" containerID="43932e6751d84ca296ef0ee5d741ea917095cc576c837e7ccb3fbab031b6f685" exitCode=0 Jan 21 15:47:21 crc kubenswrapper[4834]: I0121 15:47:21.176267 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jxl4m" event={"ID":"8ce5c74f-8cde-439d-b88e-540d56be1048","Type":"ContainerDied","Data":"43932e6751d84ca296ef0ee5d741ea917095cc576c837e7ccb3fbab031b6f685"} Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.506989 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.542609 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt\") pod \"8ce5c74f-8cde-439d-b88e-540d56be1048\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.542713 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56gpv\" (UniqueName: \"kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv\") pod \"8ce5c74f-8cde-439d-b88e-540d56be1048\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.542774 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "8ce5c74f-8cde-439d-b88e-540d56be1048" (UID: "8ce5c74f-8cde-439d-b88e-540d56be1048"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.542902 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage\") pod \"8ce5c74f-8cde-439d-b88e-540d56be1048\" (UID: \"8ce5c74f-8cde-439d-b88e-540d56be1048\") " Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.543262 4834 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/8ce5c74f-8cde-439d-b88e-540d56be1048-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.557555 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv" (OuterVolumeSpecName: "kube-api-access-56gpv") pod "8ce5c74f-8cde-439d-b88e-540d56be1048" (UID: "8ce5c74f-8cde-439d-b88e-540d56be1048"). InnerVolumeSpecName "kube-api-access-56gpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.565547 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "8ce5c74f-8cde-439d-b88e-540d56be1048" (UID: "8ce5c74f-8cde-439d-b88e-540d56be1048"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.645099 4834 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/8ce5c74f-8cde-439d-b88e-540d56be1048-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:22 crc kubenswrapper[4834]: I0121 15:47:22.645143 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56gpv\" (UniqueName: \"kubernetes.io/projected/8ce5c74f-8cde-439d-b88e-540d56be1048-kube-api-access-56gpv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:23 crc kubenswrapper[4834]: I0121 15:47:23.202365 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jxl4m" event={"ID":"8ce5c74f-8cde-439d-b88e-540d56be1048","Type":"ContainerDied","Data":"b9f01c50e1c04a7ff91d54787826a4bd03abee6dd85c1276f0095fe59cfa3155"} Jan 21 15:47:23 crc kubenswrapper[4834]: I0121 15:47:23.202965 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f01c50e1c04a7ff91d54787826a4bd03abee6dd85c1276f0095fe59cfa3155" Jan 21 15:47:23 crc kubenswrapper[4834]: I0121 15:47:23.202862 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jxl4m" Jan 21 15:47:31 crc kubenswrapper[4834]: I0121 15:47:31.324082 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:47:31 crc kubenswrapper[4834]: E0121 15:47:31.326260 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.261541 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:38 crc kubenswrapper[4834]: E0121 15:47:38.262723 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce5c74f-8cde-439d-b88e-540d56be1048" containerName="storage" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.262746 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce5c74f-8cde-439d-b88e-540d56be1048" containerName="storage" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.264261 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce5c74f-8cde-439d-b88e-540d56be1048" containerName="storage" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.265661 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.352811 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lncdd\" (UniqueName: \"kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.355042 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.355394 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.370330 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.458345 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.458474 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lncdd\" (UniqueName: \"kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.458523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.459190 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.460336 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.492200 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lncdd\" (UniqueName: \"kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd\") pod \"redhat-operators-rjjml\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:38 crc kubenswrapper[4834]: I0121 15:47:38.650657 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:39 crc kubenswrapper[4834]: I0121 15:47:39.150822 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:40 crc kubenswrapper[4834]: I0121 15:47:40.386063 4834 generic.go:334] "Generic (PLEG): container finished" podID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerID="bf4f3850539314c1314748b67dc4284aa801949fa9f0c54a9cdf7ab555d5ddb4" exitCode=0 Jan 21 15:47:40 crc kubenswrapper[4834]: I0121 15:47:40.386147 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerDied","Data":"bf4f3850539314c1314748b67dc4284aa801949fa9f0c54a9cdf7ab555d5ddb4"} Jan 21 15:47:40 crc kubenswrapper[4834]: I0121 15:47:40.386678 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerStarted","Data":"fa357d7747c141a2f141957ad897e3d882c42c25c4844cec1e3657c4ad23a728"} Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.843158 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.846990 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.856955 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.921025 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.921622 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqh8\" (UniqueName: \"kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:41 crc kubenswrapper[4834]: I0121 15:47:41.921665 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.023547 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqh8\" (UniqueName: \"kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.023641 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.023693 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.024468 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.024526 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.050534 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vpqh8\" (UniqueName: \"kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8\") pod \"community-operators-xxfc7\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.192010 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.328732 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:47:42 crc kubenswrapper[4834]: E0121 15:47:42.329006 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.440405 4834 generic.go:334] "Generic (PLEG): container finished" podID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerID="c3a5e0e2f34f9810689b81ba922a9bbdf76459ac25bdd9f1dfd9585ddac0be9e" exitCode=0 Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.440972 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerDied","Data":"c3a5e0e2f34f9810689b81ba922a9bbdf76459ac25bdd9f1dfd9585ddac0be9e"} Jan 21 15:47:42 crc kubenswrapper[4834]: I0121 15:47:42.697889 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:43 crc kubenswrapper[4834]: I0121 15:47:43.453720 4834 generic.go:334] "Generic (PLEG): container finished" podID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerID="d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633" exitCode=0 Jan 21 15:47:43 crc kubenswrapper[4834]: I0121 15:47:43.453827 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerDied","Data":"d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633"} Jan 21 15:47:43 crc kubenswrapper[4834]: I0121 15:47:43.454313 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerStarted","Data":"05dccf03c16599061fc06de19f97592e8973a23f2b9941eb36f5654956f3fc77"} Jan 21 15:47:43 crc kubenswrapper[4834]: I0121 15:47:43.460013 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerStarted","Data":"3e9578ff3b8ead32b3e13452c3338297c14332c743285125e3a3ee87fa2cfbee"} Jan 21 15:47:43 crc kubenswrapper[4834]: I0121 15:47:43.507684 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjjml" podStartSLOduration=2.9520986970000003 podStartE2EDuration="5.507628836s" podCreationTimestamp="2026-01-21 15:47:38 +0000 UTC" firstStartedPulling="2026-01-21 15:47:40.389337473 +0000 UTC m=+4606.363686518" 
lastFinishedPulling="2026-01-21 15:47:42.944867612 +0000 UTC m=+4608.919216657" observedRunningTime="2026-01-21 15:47:43.503113514 +0000 UTC m=+4609.477462559" watchObservedRunningTime="2026-01-21 15:47:43.507628836 +0000 UTC m=+4609.481977871" Jan 21 15:47:44 crc kubenswrapper[4834]: I0121 15:47:44.473589 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerStarted","Data":"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341"} Jan 21 15:47:45 crc kubenswrapper[4834]: I0121 15:47:45.488083 4834 generic.go:334] "Generic (PLEG): container finished" podID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerID="22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341" exitCode=0 Jan 21 15:47:45 crc kubenswrapper[4834]: I0121 15:47:45.488207 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerDied","Data":"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341"} Jan 21 15:47:46 crc kubenswrapper[4834]: I0121 15:47:46.499526 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerStarted","Data":"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc"} Jan 21 15:47:46 crc kubenswrapper[4834]: I0121 15:47:46.543834 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xxfc7" podStartSLOduration=3.091530708 podStartE2EDuration="5.543800505s" podCreationTimestamp="2026-01-21 15:47:41 +0000 UTC" firstStartedPulling="2026-01-21 15:47:43.456460268 +0000 UTC m=+4609.430809333" lastFinishedPulling="2026-01-21 15:47:45.908730085 +0000 UTC m=+4611.883079130" observedRunningTime="2026-01-21 15:47:46.537906951 +0000 UTC m=+4612.512256016" watchObservedRunningTime="2026-01-21 15:47:46.543800505 +0000 UTC m=+4612.518149550" Jan 21 15:47:48 crc kubenswrapper[4834]: I0121 15:47:48.651369 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:48 crc kubenswrapper[4834]: I0121 15:47:48.651486 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:48 crc kubenswrapper[4834]: I0121 15:47:48.705532 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:49 crc kubenswrapper[4834]: I0121 15:47:49.572999 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:49 crc kubenswrapper[4834]: I0121 15:47:49.821078 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:51 crc kubenswrapper[4834]: I0121 15:47:51.539405 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjjml" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="registry-server" containerID="cri-o://3e9578ff3b8ead32b3e13452c3338297c14332c743285125e3a3ee87fa2cfbee" gracePeriod=2 Jan 21 15:47:52 crc kubenswrapper[4834]: I0121 15:47:52.193096 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:52 crc kubenswrapper[4834]: I0121 15:47:52.193425 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:52 crc kubenswrapper[4834]: I0121 15:47:52.549662 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:52 crc kubenswrapper[4834]: I0121 15:47:52.609291 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:53 crc kubenswrapper[4834]: I0121 15:47:53.431126 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.328419 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e" Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.568256 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0"} Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.571826 4834 generic.go:334] "Generic (PLEG): container finished" podID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerID="3e9578ff3b8ead32b3e13452c3338297c14332c743285125e3a3ee87fa2cfbee" exitCode=0 Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.571894 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerDied","Data":"3e9578ff3b8ead32b3e13452c3338297c14332c743285125e3a3ee87fa2cfbee"} Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.572462 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xxfc7" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="registry-server" containerID="cri-o://63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc" gracePeriod=2 Jan 21 15:47:54 crc kubenswrapper[4834]: I0121 15:47:54.991049 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.058175 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqh8\" (UniqueName: \"kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8\") pod \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.058582 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content\") pod \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.058739 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities\") pod \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\" (UID: \"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.059609 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities" (OuterVolumeSpecName: "utilities") pod "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" (UID: "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.065121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8" (OuterVolumeSpecName: "kube-api-access-vpqh8") pod "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" (UID: "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1"). InnerVolumeSpecName "kube-api-access-vpqh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.126544 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" (UID: "47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.159996 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpqh8\" (UniqueName: \"kubernetes.io/projected/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-kube-api-access-vpqh8\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.160052 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.160065 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.584554 4834 generic.go:334] "Generic (PLEG): container finished" podID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerID="63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc" exitCode=0 Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.584598 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerDied","Data":"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc"} Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.584625 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxfc7" event={"ID":"47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1","Type":"ContainerDied","Data":"05dccf03c16599061fc06de19f97592e8973a23f2b9941eb36f5654956f3fc77"} Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.584643 4834 scope.go:117] "RemoveContainer" containerID="63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.584644 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xxfc7" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.606395 4834 scope.go:117] "RemoveContainer" containerID="22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.625244 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.634534 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xxfc7"] Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.650611 4834 scope.go:117] "RemoveContainer" containerID="d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.673913 4834 scope.go:117] "RemoveContainer" containerID="63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc" Jan 21 15:47:55 crc kubenswrapper[4834]: E0121 15:47:55.674869 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc\": container with ID starting with 63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc not found: ID does not exist" containerID="63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.674985 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc"} err="failed to get container status \"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc\": rpc error: code = NotFound desc = could not find container \"63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc\": container with ID starting with 63d1ad3eb3c67b9e4ed8123676013a66f7de7a1482784f4de0706ddcefb89dfc not found: ID does not exist" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.675037 4834 scope.go:117] "RemoveContainer" containerID="22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341" Jan 21 15:47:55 crc kubenswrapper[4834]: E0121 15:47:55.679097 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341\": container with ID starting with 22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341 not found: ID does not exist" containerID="22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.679161 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341"} err="failed to get container status \"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341\": rpc error: code = NotFound desc = could not find container \"22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341\": container with ID starting with 22a6edc04e1befb693e481f4300e426bdf9751a2c02bae7e80e909eb0c9a4341 not found: ID does not exist" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.679196 4834 scope.go:117] "RemoveContainer" containerID="d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633" Jan 21 15:47:55 crc kubenswrapper[4834]: E0121 15:47:55.679716 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633\": container with ID starting with d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633 not found: ID does not exist" containerID="d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.679744 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633"} err="failed to get container status \"d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633\": rpc error: code = NotFound desc = could not find container \"d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633\": container with ID starting with d3e8d8f8928ab1eee35818b4ce9f093d88ac4dffc54b0c2cb5f91771b670c633 not found: ID does not exist" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.938318 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.978469 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lncdd\" (UniqueName: \"kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd\") pod \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.978566 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content\") pod \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.978710 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities\") pod \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\" (UID: \"a46e21eb-bc91-44ae-9e2d-2b552691d5f5\") " Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.979843 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities" (OuterVolumeSpecName: "utilities") pod "a46e21eb-bc91-44ae-9e2d-2b552691d5f5" (UID: "a46e21eb-bc91-44ae-9e2d-2b552691d5f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:55 crc kubenswrapper[4834]: I0121 15:47:55.983117 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd" (OuterVolumeSpecName: "kube-api-access-lncdd") pod "a46e21eb-bc91-44ae-9e2d-2b552691d5f5" (UID: "a46e21eb-bc91-44ae-9e2d-2b552691d5f5"). InnerVolumeSpecName "kube-api-access-lncdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.080479 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.080533 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lncdd\" (UniqueName: \"kubernetes.io/projected/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-kube-api-access-lncdd\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.113159 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a46e21eb-bc91-44ae-9e2d-2b552691d5f5" (UID: "a46e21eb-bc91-44ae-9e2d-2b552691d5f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.182484 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a46e21eb-bc91-44ae-9e2d-2b552691d5f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.336504 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" path="/var/lib/kubelet/pods/47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1/volumes" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.599860 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjjml" event={"ID":"a46e21eb-bc91-44ae-9e2d-2b552691d5f5","Type":"ContainerDied","Data":"fa357d7747c141a2f141957ad897e3d882c42c25c4844cec1e3657c4ad23a728"} Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.599914 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjjml" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.599987 4834 scope.go:117] "RemoveContainer" containerID="3e9578ff3b8ead32b3e13452c3338297c14332c743285125e3a3ee87fa2cfbee" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.628085 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.628752 4834 scope.go:117] "RemoveContainer" containerID="c3a5e0e2f34f9810689b81ba922a9bbdf76459ac25bdd9f1dfd9585ddac0be9e" Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.633363 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjjml"] Jan 21 15:47:56 crc kubenswrapper[4834]: I0121 15:47:56.671476 4834 scope.go:117] "RemoveContainer" containerID="bf4f3850539314c1314748b67dc4284aa801949fa9f0c54a9cdf7ab555d5ddb4" Jan 21 15:47:58 crc kubenswrapper[4834]: I0121 15:47:58.335145 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" path="/var/lib/kubelet/pods/a46e21eb-bc91-44ae-9e2d-2b552691d5f5/volumes" Jan 21 15:48:07 crc kubenswrapper[4834]: I0121 15:48:07.067299 4834 scope.go:117] "RemoveContainer" containerID="4a1dc79cb8a892c1c45b57bc320caefdcc8e50182e67033f286fa23bfd71dcd2" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.293853 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295289 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295313 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295334 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="extract-utilities" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295341 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="extract-utilities" Jan 21 15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295358 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="extract-content" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295366 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="extract-content" Jan 21 15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295394 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="extract-utilities" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295401 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="extract-utilities" Jan 21 15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295417 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="extract-content" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295424 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="extract-content" Jan 21 
15:48:51 crc kubenswrapper[4834]: E0121 15:48:51.295439 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295445 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295709 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a46e21eb-bc91-44ae-9e2d-2b552691d5f5" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.295733 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a1a4b2-6c6b-4e69-8644-0ccb957ab4e1" containerName="registry-server" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.299703 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.362547 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.423092 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.423412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.423789 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rj8\" (UniqueName: \"kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.525231 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rj8\" (UniqueName: \"kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.525720 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.525804 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " 
pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.526859 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.526876 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.554062 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rj8\" (UniqueName: \"kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8\") pod \"redhat-marketplace-2z92v\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.671534 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:48:51 crc kubenswrapper[4834]: I0121 15:48:51.953438 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:48:52 crc kubenswrapper[4834]: I0121 15:48:52.030762 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerStarted","Data":"aa317233f2e935584c85960fd7453a34e77fbe2061b440a6bf33fc7ed1d7a271"} Jan 21 15:48:53 crc kubenswrapper[4834]: I0121 15:48:53.039624 4834 generic.go:334] "Generic (PLEG): container finished" podID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerID="baf523b4991c2e1053be9a09527cc8ac3e2bbc2a78e20d2c11e65015c712d19f" exitCode=0 Jan 21 15:48:53 crc kubenswrapper[4834]: I0121 15:48:53.039725 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerDied","Data":"baf523b4991c2e1053be9a09527cc8ac3e2bbc2a78e20d2c11e65015c712d19f"} Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.058615 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerStarted","Data":"7a64dea6a52a358552c8411b36064e6eff7efb6167cd00958c3c7f2ba7bb04a6"} Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.296158 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.298117 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.322113 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.378098 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.378181 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt466\" (UniqueName: \"kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.378563 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.480511 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.480629 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.480883 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt466\" (UniqueName: \"kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.481412 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.481476 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.507131 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jt466\" (UniqueName: \"kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466\") pod \"certified-operators-p27v7\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.630593 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:48:54 crc kubenswrapper[4834]: I0121 15:48:54.966395 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:48:55 crc kubenswrapper[4834]: I0121 15:48:55.094061 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerStarted","Data":"87b3766843d82171369d8264f7cec124aaae87921dd4f7cd04f9bc3acb9efa7b"} Jan 21 15:48:55 crc kubenswrapper[4834]: I0121 15:48:55.096895 4834 generic.go:334] "Generic (PLEG): container finished" podID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerID="7a64dea6a52a358552c8411b36064e6eff7efb6167cd00958c3c7f2ba7bb04a6" exitCode=0 Jan 21 15:48:55 crc kubenswrapper[4834]: I0121 15:48:55.096952 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerDied","Data":"7a64dea6a52a358552c8411b36064e6eff7efb6167cd00958c3c7f2ba7bb04a6"} Jan 21 15:48:56 crc kubenswrapper[4834]: I0121 15:48:56.107086 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerStarted","Data":"2be0fad07fa9b4f872a20b73b2a275f5a8821051ad238030fe46b2fb28c163f3"} Jan 21 15:48:56 crc kubenswrapper[4834]: I0121 15:48:56.108475 4834 generic.go:334] "Generic (PLEG): container finished" podID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerID="34f18f7755055fad791fd0ea399110aac5787aa2d1fac9c6118d945687bd7fae" exitCode=0 Jan 21 15:48:56 crc kubenswrapper[4834]: I0121 15:48:56.108513 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerDied","Data":"34f18f7755055fad791fd0ea399110aac5787aa2d1fac9c6118d945687bd7fae"} Jan 21 15:48:56 crc kubenswrapper[4834]: I0121 15:48:56.154473 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2z92v" podStartSLOduration=2.640593466 podStartE2EDuration="5.154440234s" podCreationTimestamp="2026-01-21 15:48:51 +0000 UTC" firstStartedPulling="2026-01-21 15:48:53.042652504 +0000 UTC m=+4679.017001549" lastFinishedPulling="2026-01-21 15:48:55.556499272 +0000 UTC m=+4681.530848317" observedRunningTime="2026-01-21 15:48:56.12963254 +0000 UTC m=+4682.103981605" watchObservedRunningTime="2026-01-21 15:48:56.154440234 +0000 UTC m=+4682.128789279" Jan 21 15:48:58 crc kubenswrapper[4834]: I0121 15:48:58.135256 4834 generic.go:334] "Generic (PLEG): container finished" podID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerID="ba401b37c236af1197a955816ddc70d273820a45121ea6dc92b8cdf0378e8823" exitCode=0 Jan 21 15:48:58 crc kubenswrapper[4834]: I0121 15:48:58.135863 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" 
event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerDied","Data":"ba401b37c236af1197a955816ddc70d273820a45121ea6dc92b8cdf0378e8823"} Jan 21 15:49:01 crc kubenswrapper[4834]: I0121 15:49:01.166127 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerStarted","Data":"085459815c6d41e0f966db4fa2ce20a3dddcf94287869d9e1f694b0ae0b56c4e"} Jan 21 15:49:01 crc kubenswrapper[4834]: I0121 15:49:01.196595 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p27v7" podStartSLOduration=3.175070977 podStartE2EDuration="7.196569891s" podCreationTimestamp="2026-01-21 15:48:54 +0000 UTC" firstStartedPulling="2026-01-21 15:48:56.110514073 +0000 UTC m=+4682.084863118" lastFinishedPulling="2026-01-21 15:49:00.132012987 +0000 UTC m=+4686.106362032" observedRunningTime="2026-01-21 15:49:01.186802857 +0000 UTC m=+4687.161151902" watchObservedRunningTime="2026-01-21 15:49:01.196569891 +0000 UTC m=+4687.170918936" Jan 21 15:49:01 crc kubenswrapper[4834]: I0121 15:49:01.672081 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:01 crc kubenswrapper[4834]: I0121 15:49:01.672181 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:01 crc kubenswrapper[4834]: I0121 15:49:01.727125 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:02 crc kubenswrapper[4834]: I0121 15:49:02.215842 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:04 crc kubenswrapper[4834]: I0121 15:49:04.630711 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:04 crc kubenswrapper[4834]: I0121 15:49:04.631137 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:04 crc kubenswrapper[4834]: I0121 15:49:04.674411 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:05 crc kubenswrapper[4834]: I0121 15:49:05.245643 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:06 crc kubenswrapper[4834]: I0121 15:49:06.479554 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:49:06 crc kubenswrapper[4834]: I0121 15:49:06.479774 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2z92v" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="registry-server" containerID="cri-o://2be0fad07fa9b4f872a20b73b2a275f5a8821051ad238030fe46b2fb28c163f3" gracePeriod=2 Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.220450 4834 generic.go:334] "Generic (PLEG): container finished" podID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerID="2be0fad07fa9b4f872a20b73b2a275f5a8821051ad238030fe46b2fb28c163f3" exitCode=0 Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.220522 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerDied","Data":"2be0fad07fa9b4f872a20b73b2a275f5a8821051ad238030fe46b2fb28c163f3"} Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.468205 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.498657 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities\") pod \"c31cb131-c9c5-4ead-83f7-7a129c57561e\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.498786 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rj8\" (UniqueName: \"kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8\") pod \"c31cb131-c9c5-4ead-83f7-7a129c57561e\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.498817 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content\") pod \"c31cb131-c9c5-4ead-83f7-7a129c57561e\" (UID: \"c31cb131-c9c5-4ead-83f7-7a129c57561e\") " Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.499850 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities" (OuterVolumeSpecName: "utilities") pod "c31cb131-c9c5-4ead-83f7-7a129c57561e" (UID: "c31cb131-c9c5-4ead-83f7-7a129c57561e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.506186 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8" (OuterVolumeSpecName: "kube-api-access-z2rj8") pod "c31cb131-c9c5-4ead-83f7-7a129c57561e" (UID: "c31cb131-c9c5-4ead-83f7-7a129c57561e"). InnerVolumeSpecName "kube-api-access-z2rj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.523099 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c31cb131-c9c5-4ead-83f7-7a129c57561e" (UID: "c31cb131-c9c5-4ead-83f7-7a129c57561e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.600710 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rj8\" (UniqueName: \"kubernetes.io/projected/c31cb131-c9c5-4ead-83f7-7a129c57561e-kube-api-access-z2rj8\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.600749 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:07 crc kubenswrapper[4834]: I0121 15:49:07.600763 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31cb131-c9c5-4ead-83f7-7a129c57561e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.232863 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2z92v" event={"ID":"c31cb131-c9c5-4ead-83f7-7a129c57561e","Type":"ContainerDied","Data":"aa317233f2e935584c85960fd7453a34e77fbe2061b440a6bf33fc7ed1d7a271"} Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.232959 4834 scope.go:117] "RemoveContainer" containerID="2be0fad07fa9b4f872a20b73b2a275f5a8821051ad238030fe46b2fb28c163f3" Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.232965 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2z92v" Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.260444 4834 scope.go:117] "RemoveContainer" containerID="7a64dea6a52a358552c8411b36064e6eff7efb6167cd00958c3c7f2ba7bb04a6" Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.276605 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.283878 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2z92v"] Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.290897 4834 scope.go:117] "RemoveContainer" containerID="baf523b4991c2e1053be9a09527cc8ac3e2bbc2a78e20d2c11e65015c712d19f" Jan 21 15:49:08 crc kubenswrapper[4834]: I0121 15:49:08.335669 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" path="/var/lib/kubelet/pods/c31cb131-c9c5-4ead-83f7-7a129c57561e/volumes" Jan 21 15:49:09 crc kubenswrapper[4834]: I0121 15:49:09.475172 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:49:09 crc kubenswrapper[4834]: I0121 15:49:09.475775 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p27v7" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="registry-server" containerID="cri-o://085459815c6d41e0f966db4fa2ce20a3dddcf94287869d9e1f694b0ae0b56c4e" gracePeriod=2 Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.258374 4834 generic.go:334] "Generic (PLEG): container finished" podID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerID="085459815c6d41e0f966db4fa2ce20a3dddcf94287869d9e1f694b0ae0b56c4e" exitCode=0 Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.258434 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" 
event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerDied","Data":"085459815c6d41e0f966db4fa2ce20a3dddcf94287869d9e1f694b0ae0b56c4e"} Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.351361 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.440777 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities\") pod \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.440871 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content\") pod \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.440905 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt466\" (UniqueName: \"kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466\") pod \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\" (UID: \"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1\") " Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.442275 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities" (OuterVolumeSpecName: "utilities") pod "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" (UID: "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.448779 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466" (OuterVolumeSpecName: "kube-api-access-jt466") pod "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" (UID: "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1"). InnerVolumeSpecName "kube-api-access-jt466". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.490283 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" (UID: "f79e5f4c-2f57-4074-b7d9-7ce03662dfd1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.542858 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.543601 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:10 crc kubenswrapper[4834]: I0121 15:49:10.543668 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt466\" (UniqueName: \"kubernetes.io/projected/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1-kube-api-access-jt466\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.271454 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p27v7" event={"ID":"f79e5f4c-2f57-4074-b7d9-7ce03662dfd1","Type":"ContainerDied","Data":"87b3766843d82171369d8264f7cec124aaae87921dd4f7cd04f9bc3acb9efa7b"} Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.271517 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p27v7" Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.271548 4834 scope.go:117] "RemoveContainer" containerID="085459815c6d41e0f966db4fa2ce20a3dddcf94287869d9e1f694b0ae0b56c4e" Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.306010 4834 scope.go:117] "RemoveContainer" containerID="ba401b37c236af1197a955816ddc70d273820a45121ea6dc92b8cdf0378e8823" Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.329201 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.336916 4834 scope.go:117] "RemoveContainer" containerID="34f18f7755055fad791fd0ea399110aac5787aa2d1fac9c6118d945687bd7fae" Jan 21 15:49:11 crc kubenswrapper[4834]: I0121 15:49:11.338320 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p27v7"] Jan 21 15:49:12 crc kubenswrapper[4834]: I0121 15:49:12.334985 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" path="/var/lib/kubelet/pods/f79e5f4c-2f57-4074-b7d9-7ce03662dfd1/volumes" Jan 21 15:50:17 crc kubenswrapper[4834]: I0121 15:50:17.114709 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:17 crc kubenswrapper[4834]: I0121 15:50:17.115603 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.024592 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"] Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026117 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="extract-utilities" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026138 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="extract-utilities" Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026168 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026176 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026190 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="extract-utilities" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026213 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="extract-utilities" Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026227 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="extract-content" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026234 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="extract-content" Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026246 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026253 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: E0121 15:50:31.026270 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="extract-content" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026277 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="extract-content" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026454 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79e5f4c-2f57-4074-b7d9-7ce03662dfd1" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.026474 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31cb131-c9c5-4ead-83f7-7a129c57561e" containerName="registry-server" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.027582 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.038369 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.038683 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.038881 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.039072 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qv7j8" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.039085 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.040957 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"] Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.162628 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqv65\" (UniqueName: \"kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.162712 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.162754 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.264465 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.264579 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.264631 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqv65\" (UniqueName: \"kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.265792 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.265949 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.293658 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqv65\" (UniqueName: \"kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65\") pod \"dnsmasq-dns-95587bc99-6k5xz\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") " pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.348371 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.391147 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.402814 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.443007 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.483166 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.483367 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvt9\" (UniqueName: \"kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.483435 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.584818 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.584972 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvt9\" (UniqueName: \"kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: 
\"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.585031 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.586376 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.587251 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.631335 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvt9\" (UniqueName: \"kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9\") pod \"dnsmasq-dns-5d79f765b5-qvwmd\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:31 crc kubenswrapper[4834]: I0121 15:50:31.766444 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.110189 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.140192 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.181024 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.183996 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.191670 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.192648 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.192688 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.192753 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.192831 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.192850 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b2rhv" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.308688 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309176 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309388 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5lkz\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309593 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309634 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309680 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.309785 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411325 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411379 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411417 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411446 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411534 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411569 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411613 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.411637 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5lkz\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.413676 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.413960 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.414286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.414293 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.420064 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.422007 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.422046 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7348a3a4c87ad15691f070cedb4f3e77a1f723dc7eb0f4e1690a0ddd1e340792/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.422164 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.429728 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.435241 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5lkz\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.459538 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") " pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.533182 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.597877 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.599156 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.603826 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.604279 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.604396 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.604644 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.604869 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rpl9f" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.627511 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.716402 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.716879 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.716954 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.716988 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.717026 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.717047 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bng2\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 
15:50:32.717076 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.717232 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.717373 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818598 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818661 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bng2\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818691 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818716 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818756 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818800 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818850 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818900 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.818960 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.819277 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.819793 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.820702 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.821885 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.827675 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.828080 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
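
Every volume in the two RabbitMQ mount sequences above passes through the same three records: operationExecutor.VerifyControllerAttachedVolume, operationExecutor.MountVolume started, and MountVolume.SetUp succeeded. When a pod sticks in ContainerCreating, the useful signal is a started record with no matching succeeded record. A hedged sketch that pairs the two from a dump like this one, assuming one journal record per line (journalctl's native output; a re-wrapped capture like this would need records split out first) and the exact kubelet message strings shown above:

    # Sketch: pair "MountVolume started" with "MountVolume.SetUp succeeded" per
    # pod/volume so a stuck mount stands out. Quotes may arrive escaped (\") as
    # in this capture, so the regexes tolerate both forms.
    import re
    import sys
    from collections import defaultdict

    STARTED = re.compile(r'operationExecutor\.MountVolume started for volume \\?"(?P<vol>[^"\\]+)')
    SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"(?P<vol>[^"\\]+)')
    POD = re.compile(r'pod="(?P<pod>[^"]+)"')

    pending = defaultdict(set)  # pod -> volumes started but not yet SetUp
    for line in sys.stdin:
        pod = POD.search(line)
        if not pod:
            continue
        if (m := STARTED.search(line)):
            pending[pod["pod"]].add(m["vol"])
        elif (m := SUCCEEDED.search(line)):
            pending[pod["pod"]].discard(m["vol"])

    for pod, vols in sorted(pending.items()):
        if vols:
            print(f"{pod}: no SetUp succeeded yet for {sorted(vols)}")
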
Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.828162 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b512534cc5a7f21b33807f7ee05096a151cc01b256b947edd979789f15adf56/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.828075 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.829479 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.844760 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bng2\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.867964 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.906109 4834 generic.go:334] "Generic (PLEG): container finished" podID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerID="cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b" exitCode=0 Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.906176 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" event={"ID":"af748b2f-1c07-4c0a-949f-4b9162f4ac71","Type":"ContainerDied","Data":"cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b"} Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.906251 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" event={"ID":"af748b2f-1c07-4c0a-949f-4b9162f4ac71","Type":"ContainerStarted","Data":"0b7eaf981b822447b669c2a03fcd1aa57a9ae14258e69311d08fc3495b696bf7"} Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.908873 4834 generic.go:334] "Generic (PLEG): container finished" podID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerID="adb67ffbc84fa8163ca1ba426942a9db274c356395fab133474bf321f9ef74d1" exitCode=0 Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.908912 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" 
event={"ID":"7940c21a-b6dd-4f1b-a744-d36ed1478875","Type":"ContainerDied","Data":"adb67ffbc84fa8163ca1ba426942a9db274c356395fab133474bf321f9ef74d1"} Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.908953 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" event={"ID":"7940c21a-b6dd-4f1b-a744-d36ed1478875","Type":"ContainerStarted","Data":"e9d1e93ec120e15cdbd7280b904a2a766d38c441266844e54ca34d3eaf2dc7ac"} Jan 21 15:50:32 crc kubenswrapper[4834]: I0121 15:50:32.950002 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.028584 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: W0121 15:50:33.054217 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bcdd395_4d82_4632_b6ae_deb0da55378e.slice/crio-e5b1d54f820b90cffa11ddf280148ae14d705ea3371e2fbb98c9cb8445b48c0f WatchSource:0}: Error finding container e5b1d54f820b90cffa11ddf280148ae14d705ea3371e2fbb98c9cb8445b48c0f: Status 404 returned error can't find the container with id e5b1d54f820b90cffa11ddf280148ae14d705ea3371e2fbb98c9cb8445b48c0f Jan 21 15:50:33 crc kubenswrapper[4834]: E0121 15:50:33.204240 4834 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 21 15:50:33 crc kubenswrapper[4834]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/af748b2f-1c07-4c0a-949f-4b9162f4ac71/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:50:33 crc kubenswrapper[4834]: > podSandboxID="0b7eaf981b822447b669c2a03fcd1aa57a9ae14258e69311d08fc3495b696bf7" Jan 21 15:50:33 crc kubenswrapper[4834]: E0121 15:50:33.204911 4834 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 15:50:33 crc kubenswrapper[4834]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqv65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95587bc99-6k5xz_openstack(af748b2f-1c07-4c0a-949f-4b9162f4ac71): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/af748b2f-1c07-4c0a-949f-4b9162f4ac71/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 21 15:50:33 crc kubenswrapper[4834]: > logger="UnhandledError" Jan 21 15:50:33 crc kubenswrapper[4834]: E0121 15:50:33.207000 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/af748b2f-1c07-4c0a-949f-4b9162f4ac71/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.285677 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.286725 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.289393 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.297820 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-h8bpr" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.304947 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.327706 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kolla-config\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.327844 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tljt\" (UniqueName: \"kubernetes.io/projected/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kube-api-access-6tljt\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.327998 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-config-data\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.429776 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-config-data\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.429913 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kolla-config\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.430004 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tljt\" (UniqueName: \"kubernetes.io/projected/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kube-api-access-6tljt\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.431143 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-config-data\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.431596 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kolla-config\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.455687 4834 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6tljt\" (UniqueName: \"kubernetes.io/projected/ed90e2cf-be92-46d4-b3f0-ef730606de1c-kube-api-access-6tljt\") pod \"memcached-0\" (UID: \"ed90e2cf-be92-46d4-b3f0-ef730606de1c\") " pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.498230 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.499832 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.505535 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tcxrs" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.505638 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.505936 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.506253 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.512213 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.516602 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.535028 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632817 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632902 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632945 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632960 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632982 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.632997 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvc5\" (UniqueName: \"kubernetes.io/projected/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kube-api-access-vjvc5\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.633235 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.633383 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.658094 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735017 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735099 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735121 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735144 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735164 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvc5\" (UniqueName: \"kubernetes.io/projected/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kube-api-access-vjvc5\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735229 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735284 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.735317 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.736405 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.736668 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-config-data-default\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.736807 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kolla-config\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.737765 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.739175 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.739304 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.739336 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb78c5fd20f0c1530ff7d07a63cfb027d2d83600470336c97b6ec61324d98def/globalmount\"" pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.740141 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.755514 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjvc5\" (UniqueName: \"kubernetes.io/projected/eb09e7ff-f752-4f08-adfb-8bdee7a815fd-kube-api-access-vjvc5\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.796989 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ec398a7-f11f-4562-acb6-c028f9c5b2f3\") pod \"openstack-galera-0\" (UID: \"eb09e7ff-f752-4f08-adfb-8bdee7a815fd\") " pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.822021 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.922724 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerStarted","Data":"dc1866e8fd8ebab884a01251a5d4d144b625331f55f601c5866fb7fda2e59569"} Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.926196 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" event={"ID":"7940c21a-b6dd-4f1b-a744-d36ed1478875","Type":"ContainerStarted","Data":"f1471b709271c1dbffd5ff03a6478079e7eedc9507798f8ff5824e9bde65e1e8"} Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.926453 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.927520 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerStarted","Data":"e5b1d54f820b90cffa11ddf280148ae14d705ea3371e2fbb98c9cb8445b48c0f"} Jan 21 15:50:33 crc kubenswrapper[4834]: I0121 15:50:33.946047 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" podStartSLOduration=2.9459936620000002 podStartE2EDuration="2.945993662s" podCreationTimestamp="2026-01-21 15:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:33.944101163 +0000 UTC m=+4779.918450218" watchObservedRunningTime="2026-01-21 15:50:33.945993662 +0000 UTC m=+4779.920342717" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.113082 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 15:50:34 crc kubenswrapper[4834]: W0121 15:50:34.119204 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded90e2cf_be92_46d4_b3f0_ef730606de1c.slice/crio-16086ed2e3dd96d7fd66b1140211e378c49371f425ef1da05f04ae0d10c1a811 WatchSource:0}: Error finding container 16086ed2e3dd96d7fd66b1140211e378c49371f425ef1da05f04ae0d10c1a811: Status 404 returned error can't find the container with id 16086ed2e3dd96d7fd66b1140211e378c49371f425ef1da05f04ae0d10c1a811 Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.267620 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 15:50:34 crc kubenswrapper[4834]: W0121 15:50:34.277564 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb09e7ff_f752_4f08_adfb_8bdee7a815fd.slice/crio-8d5d15c9c099e33b33d56af92f69b540b39a5dd963fe561a04a8dd5753308cc1 WatchSource:0}: Error finding container 8d5d15c9c099e33b33d56af92f69b540b39a5dd963fe561a04a8dd5753308cc1: Status 404 returned error can't find the container with id 8d5d15c9c099e33b33d56af92f69b540b39a5dd963fe561a04a8dd5753308cc1 Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.495579 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.497506 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.507112 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.509889 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.510327 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.510546 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2n85c" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.528333 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.666762 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.666830 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.666886 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.666941 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.666971 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.667002 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.667027 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-87fabb85-7ddb-4339-83ab-59985833495c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87fabb85-7ddb-4339-83ab-59985833495c\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.667069 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84rz\" (UniqueName: \"kubernetes.io/projected/4fcc39f6-a0b5-404d-a8c6-329408e95823-kube-api-access-v84rz\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.768837 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.769031 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770672 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770720 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770742 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87fabb85-7ddb-4339-83ab-59985833495c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87fabb85-7ddb-4339-83ab-59985833495c\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770769 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84rz\" (UniqueName: \"kubernetes.io/projected/4fcc39f6-a0b5-404d-a8c6-329408e95823-kube-api-access-v84rz\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770812 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.770832 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.769865 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.773034 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.773066 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.773229 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fcc39f6-a0b5-404d-a8c6-329408e95823-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.776067 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.776651 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fcc39f6-a0b5-404d-a8c6-329408e95823-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0" Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.777119 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.777147 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87fabb85-7ddb-4339-83ab-59985833495c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87fabb85-7ddb-4339-83ab-59985833495c\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/31fc525617facc20afce68d8fad5f1cea7d63856b6ff6e5ccaa7500e574fcb12/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.796469 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84rz\" (UniqueName: \"kubernetes.io/projected/4fcc39f6-a0b5-404d-a8c6-329408e95823-kube-api-access-v84rz\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.913314 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87fabb85-7ddb-4339-83ab-59985833495c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87fabb85-7ddb-4339-83ab-59985833495c\") pod \"openstack-cell1-galera-0\" (UID: \"4fcc39f6-a0b5-404d-a8c6-329408e95823\") " pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.957008 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ed90e2cf-be92-46d4-b3f0-ef730606de1c","Type":"ContainerStarted","Data":"ef0d800130e7572cc0a8e3d892e955ffbed454077e758a5c95017673b49f26a0"}
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.957098 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ed90e2cf-be92-46d4-b3f0-ef730606de1c","Type":"ContainerStarted","Data":"16086ed2e3dd96d7fd66b1140211e378c49371f425ef1da05f04ae0d10c1a811"}
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.957322 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.962058 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerStarted","Data":"ffde81093f28e04236f7a40063777aaa059e0e677f5062a18aa95cca36e50d7f"}
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.964226 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb09e7ff-f752-4f08-adfb-8bdee7a815fd","Type":"ContainerStarted","Data":"8d5d15c9c099e33b33d56af92f69b540b39a5dd963fe561a04a8dd5753308cc1"}
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.966164 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" event={"ID":"af748b2f-1c07-4c0a-949f-4b9162f4ac71","Type":"ContainerStarted","Data":"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"}
Jan 21 15:50:34 crc kubenswrapper[4834]: I0121 15:50:34.981455 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.981420579 podStartE2EDuration="1.981420579s" podCreationTimestamp="2026-01-21 15:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:34.975310358 +0000 UTC m=+4780.949659403" watchObservedRunningTime="2026-01-21 15:50:34.981420579 +0000 UTC m=+4780.955769624"
Jan 21 15:50:35 crc kubenswrapper[4834]: I0121 15:50:35.057820 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" podStartSLOduration=5.057784701 podStartE2EDuration="5.057784701s" podCreationTimestamp="2026-01-21 15:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:35.051190686 +0000 UTC m=+4781.025539731" watchObservedRunningTime="2026-01-21 15:50:35.057784701 +0000 UTC m=+4781.032133746"
Jan 21 15:50:35 crc kubenswrapper[4834]: I0121 15:50:35.187801 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:35 crc kubenswrapper[4834]: I0121 15:50:35.637461 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 21 15:50:35 crc kubenswrapper[4834]: I0121 15:50:35.976859 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb09e7ff-f752-4f08-adfb-8bdee7a815fd","Type":"ContainerStarted","Data":"fef73f04ff601c4ace7194ddfd6d44291eb08fe005aa8c849f28bd62d6efb237"}
Jan 21 15:50:35 crc kubenswrapper[4834]: I0121 15:50:35.978434 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerStarted","Data":"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028"}
Jan 21 15:50:36 crc kubenswrapper[4834]: W0121 15:50:36.298411 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fcc39f6_a0b5_404d_a8c6_329408e95823.slice/crio-aa0bec0fedee5af6112945ed500211f13c4affeec4d47dbf2be243c5b8942cce WatchSource:0}: Error finding container aa0bec0fedee5af6112945ed500211f13c4affeec4d47dbf2be243c5b8942cce: Status 404 returned error can't find the container with id aa0bec0fedee5af6112945ed500211f13c4affeec4d47dbf2be243c5b8942cce
Jan 21 15:50:36 crc kubenswrapper[4834]: I0121 15:50:36.349371 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-6k5xz"
Jan 21 15:50:36 crc kubenswrapper[4834]: I0121 15:50:36.989593 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4fcc39f6-a0b5-404d-a8c6-329408e95823","Type":"ContainerStarted","Data":"66d6ebef9b963c7d74255714959a199d6a9b797b94a84d59e88260c555e8af85"}
Jan 21 15:50:36 crc kubenswrapper[4834]: I0121 15:50:36.990074 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4fcc39f6-a0b5-404d-a8c6-329408e95823","Type":"ContainerStarted","Data":"aa0bec0fedee5af6112945ed500211f13c4affeec4d47dbf2be243c5b8942cce"}
Jan 21 15:50:40 crc kubenswrapper[4834]: I0121 15:50:40.017552 4834 generic.go:334] "Generic (PLEG): container finished" podID="eb09e7ff-f752-4f08-adfb-8bdee7a815fd" containerID="fef73f04ff601c4ace7194ddfd6d44291eb08fe005aa8c849f28bd62d6efb237" exitCode=0
Jan 21 15:50:40 crc kubenswrapper[4834]: I0121 15:50:40.018140 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb09e7ff-f752-4f08-adfb-8bdee7a815fd","Type":"ContainerDied","Data":"fef73f04ff601c4ace7194ddfd6d44291eb08fe005aa8c849f28bd62d6efb237"}
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.030573 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eb09e7ff-f752-4f08-adfb-8bdee7a815fd","Type":"ContainerStarted","Data":"2e54f79d61c3473c83a57556d30874bc5320726accf1a5733ba7aa10b79d537b"}
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.032589 4834 generic.go:334] "Generic (PLEG): container finished" podID="4fcc39f6-a0b5-404d-a8c6-329408e95823" containerID="66d6ebef9b963c7d74255714959a199d6a9b797b94a84d59e88260c555e8af85" exitCode=0
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.032646 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4fcc39f6-a0b5-404d-a8c6-329408e95823","Type":"ContainerDied","Data":"66d6ebef9b963c7d74255714959a199d6a9b797b94a84d59e88260c555e8af85"}
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.066636 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.06660926 podStartE2EDuration="9.06660926s" podCreationTimestamp="2026-01-21 15:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:41.064170374 +0000 UTC m=+4787.038519439" watchObservedRunningTime="2026-01-21 15:50:41.06660926 +0000 UTC m=+4787.040958305"
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.351133 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-6k5xz"
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.769103 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd"
Jan 21 15:50:41 crc kubenswrapper[4834]: I0121 15:50:41.848732 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"]
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.043109 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4fcc39f6-a0b5-404d-a8c6-329408e95823","Type":"ContainerStarted","Data":"13923144d18a7ce6afb31152333c545705dc4775d2d8f254bfbe6cbb6a7ec850"}
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.043252 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="dnsmasq-dns" containerID="cri-o://9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e" gracePeriod=10
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.075529 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.075506068 podStartE2EDuration="9.075506068s" podCreationTimestamp="2026-01-21 15:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:42.070619015 +0000 UTC m=+4788.044968100" watchObservedRunningTime="2026-01-21 15:50:42.075506068 +0000 UTC m=+4788.049855113"
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.476238 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-6k5xz"
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.515701 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqv65\" (UniqueName: \"kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65\") pod \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") "
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.515893 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc\") pod \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") "
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.515957 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config\") pod \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\" (UID: \"af748b2f-1c07-4c0a-949f-4b9162f4ac71\") "
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.524610 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65" (OuterVolumeSpecName: "kube-api-access-mqv65") pod "af748b2f-1c07-4c0a-949f-4b9162f4ac71" (UID: "af748b2f-1c07-4c0a-949f-4b9162f4ac71"). InnerVolumeSpecName "kube-api-access-mqv65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.559437 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af748b2f-1c07-4c0a-949f-4b9162f4ac71" (UID: "af748b2f-1c07-4c0a-949f-4b9162f4ac71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.559438 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config" (OuterVolumeSpecName: "config") pod "af748b2f-1c07-4c0a-949f-4b9162f4ac71" (UID: "af748b2f-1c07-4c0a-949f-4b9162f4ac71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.618204 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.618246 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af748b2f-1c07-4c0a-949f-4b9162f4ac71-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:42 crc kubenswrapper[4834]: I0121 15:50:42.618257 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqv65\" (UniqueName: \"kubernetes.io/projected/af748b2f-1c07-4c0a-949f-4b9162f4ac71-kube-api-access-mqv65\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.058604 4834 generic.go:334] "Generic (PLEG): container finished" podID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerID="9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e" exitCode=0
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.058662 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" event={"ID":"af748b2f-1c07-4c0a-949f-4b9162f4ac71","Type":"ContainerDied","Data":"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"}
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.058691 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-6k5xz"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.058727 4834 scope.go:117] "RemoveContainer" containerID="9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.058707 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-6k5xz" event={"ID":"af748b2f-1c07-4c0a-949f-4b9162f4ac71","Type":"ContainerDied","Data":"0b7eaf981b822447b669c2a03fcd1aa57a9ae14258e69311d08fc3495b696bf7"}
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.083639 4834 scope.go:117] "RemoveContainer" containerID="cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.108435 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"]
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.116173 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-6k5xz"]
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.130518 4834 scope.go:117] "RemoveContainer" containerID="9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"
Jan 21 15:50:43 crc kubenswrapper[4834]: E0121 15:50:43.130919 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e\": container with ID starting with 9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e not found: ID does not exist" containerID="9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.130974 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e"} err="failed to get container status \"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e\": rpc error: code = NotFound desc = could not find container \"9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e\": container with ID starting with 9e24d7fae617cdb838808f76872760ff3a86df187ea8a0f0112c49466139f18e not found: ID does not exist"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.131010 4834 scope.go:117] "RemoveContainer" containerID="cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b"
Jan 21 15:50:43 crc kubenswrapper[4834]: E0121 15:50:43.131270 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b\": container with ID starting with cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b not found: ID does not exist" containerID="cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.131300 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b"} err="failed to get container status \"cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b\": rpc error: code = NotFound desc = could not find container \"cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b\": container with ID starting with cebacee37c26e719b371aac3160f0fc9ad651806ebefe30b4a9283a84218117b not found: ID does not exist"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.660704 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.825164 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 21 15:50:43 crc kubenswrapper[4834]: I0121 15:50:43.825222 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 21 15:50:44 crc kubenswrapper[4834]: I0121 15:50:44.336825 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" path="/var/lib/kubelet/pods/af748b2f-1c07-4c0a-949f-4b9162f4ac71/volumes"
Jan 21 15:50:45 crc kubenswrapper[4834]: I0121 15:50:45.188222 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:45 crc kubenswrapper[4834]: I0121 15:50:45.188753 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:46 crc kubenswrapper[4834]: I0121 15:50:46.133643 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 21 15:50:46 crc kubenswrapper[4834]: I0121 15:50:46.217047 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 21 15:50:47 crc kubenswrapper[4834]: I0121 15:50:47.116476 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:50:47 crc kubenswrapper[4834]: I0121 15:50:47.117104 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:50:47 crc kubenswrapper[4834]: I0121 15:50:47.478212 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:47 crc kubenswrapper[4834]: I0121 15:50:47.539198 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.442458 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2fzbh"]
Jan 21 15:50:52 crc kubenswrapper[4834]: E0121 15:50:52.443223 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="init"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.443241 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="init"
Jan 21 15:50:52 crc kubenswrapper[4834]: E0121 15:50:52.443271 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="dnsmasq-dns"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.443279 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="dnsmasq-dns"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.443480 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="af748b2f-1c07-4c0a-949f-4b9162f4ac71" containerName="dnsmasq-dns"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.444179 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.449988 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2fzbh"]
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.454524 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.568186 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk9dz\" (UniqueName: \"kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.568284 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.669997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk9dz\" (UniqueName: \"kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.670079 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.671289 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.697679 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk9dz\" (UniqueName: \"kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz\") pod \"root-account-create-update-2fzbh\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") " pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:52 crc kubenswrapper[4834]: I0121 15:50:52.774420 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:53 crc kubenswrapper[4834]: I0121 15:50:53.029803 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2fzbh"]
Jan 21 15:50:53 crc kubenswrapper[4834]: W0121 15:50:53.036073 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7d5e53_d9e9_449e_8cda_6ac1dc3f7693.slice/crio-66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55 WatchSource:0}: Error finding container 66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55: Status 404 returned error can't find the container with id 66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55
Jan 21 15:50:53 crc kubenswrapper[4834]: I0121 15:50:53.126308 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2fzbh" event={"ID":"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693","Type":"ContainerStarted","Data":"66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55"}
Jan 21 15:50:54 crc kubenswrapper[4834]: I0121 15:50:54.134811 4834 generic.go:334] "Generic (PLEG): container finished" podID="fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" containerID="5021598e476a84b0b49220f8124f5ac19753b9c68f47d844be94ded9929c19ec" exitCode=0
Jan 21 15:50:54 crc kubenswrapper[4834]: I0121 15:50:54.134896 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2fzbh" event={"ID":"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693","Type":"ContainerDied","Data":"5021598e476a84b0b49220f8124f5ac19753b9c68f47d844be94ded9929c19ec"}
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.476840 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.615925 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk9dz\" (UniqueName: \"kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz\") pod \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") "
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.616017 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts\") pod \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\" (UID: \"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693\") "
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.616888 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" (UID: "fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.621329 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz" (OuterVolumeSpecName: "kube-api-access-xk9dz") pod "fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" (UID: "fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693"). InnerVolumeSpecName "kube-api-access-xk9dz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.718120 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk9dz\" (UniqueName: \"kubernetes.io/projected/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-kube-api-access-xk9dz\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:55 crc kubenswrapper[4834]: I0121 15:50:55.718507 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:50:56 crc kubenswrapper[4834]: I0121 15:50:56.150881 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2fzbh"
Jan 21 15:50:56 crc kubenswrapper[4834]: I0121 15:50:56.150918 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2fzbh" event={"ID":"fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693","Type":"ContainerDied","Data":"66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55"}
Jan 21 15:50:56 crc kubenswrapper[4834]: I0121 15:50:56.150993 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f5b295966bdcfeaa1dc3f74499bf0dc029b65a920f3d14da7e53723556be55"
Jan 21 15:50:58 crc kubenswrapper[4834]: I0121 15:50:58.456843 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2fzbh"]
Jan 21 15:50:58 crc kubenswrapper[4834]: I0121 15:50:58.464009 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2fzbh"]
Jan 21 15:51:00 crc kubenswrapper[4834]: I0121 15:51:00.338422 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" path="/var/lib/kubelet/pods/fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693/volumes"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.467330 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8zx8w"]
Jan 21 15:51:03 crc kubenswrapper[4834]: E0121 15:51:03.468538 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" containerName="mariadb-account-create-update"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.468562 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" containerName="mariadb-account-create-update"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.468784 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7d5e53-d9e9-449e-8cda-6ac1dc3f7693" containerName="mariadb-account-create-update"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.469689 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.472856 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.476815 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8zx8w"]
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.558072 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.558445 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh7hk\" (UniqueName: \"kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.661140 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh7hk\" (UniqueName: \"kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.661558 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.662727 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.682706 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh7hk\" (UniqueName: \"kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk\") pod \"root-account-create-update-8zx8w\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") " pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:03 crc kubenswrapper[4834]: I0121 15:51:03.795429 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:04 crc kubenswrapper[4834]: I0121 15:51:04.047760 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8zx8w"]
Jan 21 15:51:04 crc kubenswrapper[4834]: I0121 15:51:04.220685 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8zx8w" event={"ID":"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd","Type":"ContainerStarted","Data":"9230703ef11ad257e766d93e92ab568a9daac6918d9783f0325a4b0ccdfa48cf"}
Jan 21 15:51:05 crc kubenswrapper[4834]: I0121 15:51:05.231578 4834 generic.go:334] "Generic (PLEG): container finished" podID="e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" containerID="7bc2aedaa589d1b68d5e229ae9b7802a03b4cde0d8d7bd68ad58aa3e602d02b1" exitCode=0
Jan 21 15:51:05 crc kubenswrapper[4834]: I0121 15:51:05.231646 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8zx8w" event={"ID":"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd","Type":"ContainerDied","Data":"7bc2aedaa589d1b68d5e229ae9b7802a03b4cde0d8d7bd68ad58aa3e602d02b1"}
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.532288 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.614411 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh7hk\" (UniqueName: \"kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk\") pod \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") "
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.614499 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts\") pod \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\" (UID: \"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd\") "
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.615399 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" (UID: "e3421b9b-83cb-488d-8b6c-f1e6d7303ccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.628288 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk" (OuterVolumeSpecName: "kube-api-access-bh7hk") pod "e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" (UID: "e3421b9b-83cb-488d-8b6c-f1e6d7303ccd"). InnerVolumeSpecName "kube-api-access-bh7hk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.716822 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh7hk\" (UniqueName: \"kubernetes.io/projected/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-kube-api-access-bh7hk\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:06 crc kubenswrapper[4834]: I0121 15:51:06.716887 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:07 crc kubenswrapper[4834]: I0121 15:51:07.249606 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8zx8w" event={"ID":"e3421b9b-83cb-488d-8b6c-f1e6d7303ccd","Type":"ContainerDied","Data":"9230703ef11ad257e766d93e92ab568a9daac6918d9783f0325a4b0ccdfa48cf"}
Jan 21 15:51:07 crc kubenswrapper[4834]: I0121 15:51:07.249658 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9230703ef11ad257e766d93e92ab568a9daac6918d9783f0325a4b0ccdfa48cf"
Jan 21 15:51:07 crc kubenswrapper[4834]: I0121 15:51:07.249676 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8zx8w"
Jan 21 15:51:07 crc kubenswrapper[4834]: I0121 15:51:07.251348 4834 generic.go:334] "Generic (PLEG): container finished" podID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerID="ffde81093f28e04236f7a40063777aaa059e0e677f5062a18aa95cca36e50d7f" exitCode=0
Jan 21 15:51:07 crc kubenswrapper[4834]: I0121 15:51:07.251382 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerDied","Data":"ffde81093f28e04236f7a40063777aaa059e0e677f5062a18aa95cca36e50d7f"}
Jan 21 15:51:08 crc kubenswrapper[4834]: I0121 15:51:08.262284 4834 generic.go:334] "Generic (PLEG): container finished" podID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerID="c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028" exitCode=0
Jan 21 15:51:08 crc kubenswrapper[4834]: I0121 15:51:08.262362 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerDied","Data":"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028"}
Jan 21 15:51:08 crc kubenswrapper[4834]: I0121 15:51:08.265400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerStarted","Data":"beb37555c67a6c03275ebdb6af0f2bad1e1b6aa27a96e2c5a7d3d880ed4da79e"}
Jan 21 15:51:08 crc kubenswrapper[4834]: I0121 15:51:08.265667 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 15:51:09 crc kubenswrapper[4834]: I0121 15:51:09.275644 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerStarted","Data":"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc"}
Jan 21 15:51:09 crc kubenswrapper[4834]: I0121 15:51:09.275991 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:51:09 crc kubenswrapper[4834]: I0121 15:51:09.311555 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.311521471 podStartE2EDuration="38.311521471s" podCreationTimestamp="2026-01-21 15:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:08.320489821 +0000 UTC m=+4814.294838876" watchObservedRunningTime="2026-01-21 15:51:09.311521471 +0000 UTC m=+4815.285870516"
Jan 21 15:51:09 crc kubenswrapper[4834]: I0121 15:51:09.311812 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.31180407 podStartE2EDuration="38.31180407s" podCreationTimestamp="2026-01-21 15:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:09.299837456 +0000 UTC m=+4815.274186531" watchObservedRunningTime="2026-01-21 15:51:09.31180407 +0000 UTC m=+4815.286153115"
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.113597 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.115049 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.115099 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.115761 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.115833 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0" gracePeriod=600
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.343921 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0" exitCode=0
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.343960 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0"}
Jan 21 15:51:17 crc kubenswrapper[4834]: I0121 15:51:17.344017 4834 scope.go:117] "RemoveContainer" containerID="5b67394bf9d2b718c1d701d03c7b412d3af3e4be9af78faeb176b2e46bdba92e"
Jan 21 15:51:18 crc kubenswrapper[4834]: I0121 15:51:18.354908 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"}
Jan 21 15:51:22 crc kubenswrapper[4834]: I0121 15:51:22.538531 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 15:51:22 crc kubenswrapper[4834]: I0121 15:51:22.955202 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.484636 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"]
Jan 21 15:51:25 crc kubenswrapper[4834]: E0121 15:51:25.485398 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" containerName="mariadb-account-create-update"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.485413 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" containerName="mariadb-account-create-update"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.485592 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" containerName="mariadb-account-create-update"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.486590 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.499195 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"]
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.632711 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.632785 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.632833 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4h5g\" (UniqueName: \"kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.734139 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.734301 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.734413 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4h5g\" (UniqueName: \"kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.735787 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.735976 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.755391 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4h5g\" (UniqueName: \"kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g\") pod \"dnsmasq-dns-699964fbc-8p798\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:25 crc kubenswrapper[4834]: I0121 15:51:25.819443 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:26 crc kubenswrapper[4834]: I0121 15:51:26.235113 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 21 15:51:26 crc kubenswrapper[4834]: I0121 15:51:26.271118 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"]
Jan 21 15:51:26 crc kubenswrapper[4834]: I0121 15:51:26.419780 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8p798" event={"ID":"2ae99f7a-abb1-41a2-9aca-179841af7226","Type":"ContainerStarted","Data":"412ff08abde6667919b126d64729a7131d0723ae17c2a674781218269e0993f6"}
Jan 21 15:51:26 crc kubenswrapper[4834]: I0121 15:51:26.946271 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 21 15:51:27 crc kubenswrapper[4834]: I0121 15:51:27.430665 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerID="78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247" exitCode=0
Jan 21 15:51:27 crc kubenswrapper[4834]: I0121 15:51:27.430727 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8p798" event={"ID":"2ae99f7a-abb1-41a2-9aca-179841af7226","Type":"ContainerDied","Data":"78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247"}
Jan 21 15:51:28 crc kubenswrapper[4834]: I0121 15:51:28.157982 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="rabbitmq" containerID="cri-o://beb37555c67a6c03275ebdb6af0f2bad1e1b6aa27a96e2c5a7d3d880ed4da79e" gracePeriod=604799
Jan 21 15:51:28 crc kubenswrapper[4834]: I0121 15:51:28.440866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8p798" event={"ID":"2ae99f7a-abb1-41a2-9aca-179841af7226","Type":"ContainerStarted","Data":"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1"}
Jan 21 15:51:28 crc kubenswrapper[4834]: I0121 15:51:28.441118 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-8p798"
Jan 21 15:51:28 crc kubenswrapper[4834]: I0121 15:51:28.465961 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-8p798" podStartSLOduration=3.465919766 podStartE2EDuration="3.465919766s" podCreationTimestamp="2026-01-21 15:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:28.457833604 +0000 UTC m=+4834.432182689" watchObservedRunningTime="2026-01-21 15:51:28.465919766 +0000 UTC m=+4834.440268821"
Jan 21 15:51:28 crc kubenswrapper[4834]: I0121 15:51:28.711097 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="rabbitmq" containerID="cri-o://583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc" gracePeriod=604799
Jan 21 15:51:32 crc kubenswrapper[4834]: I0121 15:51:32.533880 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5672: connect: connection refused"
Jan 21 15:51:32 crc kubenswrapper[4834]: I0121 15:51:32.951221 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.242:5672: connect: connection refused"
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.495854 4834 generic.go:334] "Generic (PLEG): container finished" podID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerID="beb37555c67a6c03275ebdb6af0f2bad1e1b6aa27a96e2c5a7d3d880ed4da79e" exitCode=0
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.495999 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerDied","Data":"beb37555c67a6c03275ebdb6af0f2bad1e1b6aa27a96e2c5a7d3d880ed4da79e"}
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.798373 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883206 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883364 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883410 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883448 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883561 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5lkz\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883647 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883694 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.883720 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.884116 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"0bcdd395-4d82-4632-b6ae-deb0da55378e\" (UID: \"0bcdd395-4d82-4632-b6ae-deb0da55378e\") "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.886287 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.886515 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.888017 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.907595 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz" (OuterVolumeSpecName: "kube-api-access-w5lkz") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "kube-api-access-w5lkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.911817 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info" (OuterVolumeSpecName: "pod-info") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.912226 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.916043 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398" (OuterVolumeSpecName: "persistence") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.920076 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf" (OuterVolumeSpecName: "server-conf") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.979074 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0bcdd395-4d82-4632-b6ae-deb0da55378e" (UID: "0bcdd395-4d82-4632-b6ae-deb0da55378e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986654 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") on node \"crc\" "
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986695 4834 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bcdd395-4d82-4632-b6ae-deb0da55378e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986709 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986724 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986739 4834 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-server-conf\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986750 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5lkz\" (UniqueName: \"kubernetes.io/projected/0bcdd395-4d82-4632-b6ae-deb0da55378e-kube-api-access-w5lkz\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986763 4834 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bcdd395-4d82-4632-b6ae-deb0da55378e-pod-info\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986774 4834 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bcdd395-4d82-4632-b6ae-deb0da55378e-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:34 crc kubenswrapper[4834]: I0121 15:51:34.986785 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bcdd395-4d82-4632-b6ae-deb0da55378e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.016764 4834 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.017133 4834 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398") on node "crc"
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.088516 4834 reconciler_common.go:293] "Volume detached for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.249674 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391344 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391413 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391459 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391489 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391520 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391563 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") "
Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391691
4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.391802 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bng2\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.392061 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.392394 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie\") pod \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\" (UID: \"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7\") " Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.392634 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.392654 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.396087 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.396285 4834 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.396312 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.397244 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info" (OuterVolumeSpecName: "pod-info") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). 
InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.397294 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2" (OuterVolumeSpecName: "kube-api-access-9bng2") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "kube-api-access-9bng2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.400208 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.410873 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad" (OuterVolumeSpecName: "persistence") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "pvc-0912a214-fbd9-43cb-b411-3aac05e959ad". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.416112 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf" (OuterVolumeSpecName: "server-conf") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.466716 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" (UID: "9af1ebf0-0161-4f52-9cf5-5afffcaa68c7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497683 4834 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497725 4834 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497736 4834 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497749 4834 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497800 4834 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") on node \"crc\" " Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.497814 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bng2\" (UniqueName: \"kubernetes.io/projected/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7-kube-api-access-9bng2\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.507971 4834 generic.go:334] "Generic (PLEG): container finished" podID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerID="583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc" exitCode=0 Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.508032 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.508054 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerDied","Data":"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc"} Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.508090 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9af1ebf0-0161-4f52-9cf5-5afffcaa68c7","Type":"ContainerDied","Data":"dc1866e8fd8ebab884a01251a5d4d144b625331f55f601c5866fb7fda2e59569"} Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.508110 4834 scope.go:117] "RemoveContainer" containerID="583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.512980 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bcdd395-4d82-4632-b6ae-deb0da55378e","Type":"ContainerDied","Data":"e5b1d54f820b90cffa11ddf280148ae14d705ea3371e2fbb98c9cb8445b48c0f"} Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.513070 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.515584 4834 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.515742 4834 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0912a214-fbd9-43cb-b411-3aac05e959ad" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad") on node "crc" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.539433 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.545329 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.546065 4834 scope.go:117] "RemoveContainer" containerID="c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.561200 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.573998 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596026 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596148 4834 scope.go:117] "RemoveContainer" containerID="583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.596505 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="setup-container" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596532 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="setup-container" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.596544 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596553 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.596578 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="setup-container" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596589 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="setup-container" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.596602 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596611 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596807 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.596834 4834 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" containerName="rabbitmq" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.597544 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc\": container with ID starting with 583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc not found: ID does not exist" containerID="583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.597584 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc"} err="failed to get container status \"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc\": rpc error: code = NotFound desc = could not find container \"583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc\": container with ID starting with 583b50a0ebfbb7da305de89db98a45a0cc96b32f37014f2db7fec54bfc1b3ccc not found: ID does not exist" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.597612 4834 scope.go:117] "RemoveContainer" containerID="c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.597941 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.599646 4834 reconciler_common.go:293] "Volume detached for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.600392 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 15:51:35 crc kubenswrapper[4834]: E0121 15:51:35.600488 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028\": container with ID starting with c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028 not found: ID does not exist" containerID="c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.600519 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028"} err="failed to get container status \"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028\": rpc error: code = NotFound desc = could not find container \"c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028\": container with ID starting with c1ebdfb0eafd7a44e032c781d4247ed0a4dd73f7927cd66629f3299c44752028 not found: ID does not exist" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.600537 4834 scope.go:117] "RemoveContainer" containerID="beb37555c67a6c03275ebdb6af0f2bad1e1b6aa27a96e2c5a7d3d880ed4da79e" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.600614 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.600833 4834 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.601003 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.601252 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rpl9f" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.612331 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.617960 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.619651 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.623527 4834 scope.go:117] "RemoveContainer" containerID="ffde81093f28e04236f7a40063777aaa059e0e677f5062a18aa95cca36e50d7f" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.624629 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.624721 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.624882 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b2rhv" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.625549 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.625790 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.626412 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701314 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b94bf96-a478-4e6d-adce-ef88ef069f9a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701380 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b925ea00-8dd3-4ced-add2-41483f9b4d63-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701426 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701450 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqff\" (UniqueName: 
\"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-kube-api-access-thqff\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701560 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b94bf96-a478-4e6d-adce-ef88ef069f9a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701602 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701661 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701695 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701737 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b925ea00-8dd3-4ced-add2-41483f9b4d63-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701842 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701879 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701921 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701969 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.701986 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr4m\" (UniqueName: \"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-kube-api-access-9wr4m\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.702027 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.702070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803102 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803150 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803187 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b94bf96-a478-4e6d-adce-ef88ef069f9a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803207 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b925ea00-8dd3-4ced-add2-41483f9b4d63-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803238 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803255 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqff\" (UniqueName: \"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-kube-api-access-thqff\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803281 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b94bf96-a478-4e6d-adce-ef88ef069f9a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803295 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803313 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803351 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803367 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b925ea00-8dd3-4ced-add2-41483f9b4d63-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803386 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803413 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803432 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803456 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803472 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.803489 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wr4m\" (UniqueName: \"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-kube-api-access-9wr4m\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.804839 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.805223 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.805267 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.805416 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.805744 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.806114 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b925ea00-8dd3-4ced-add2-41483f9b4d63-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.806209 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.807025 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5b94bf96-a478-4e6d-adce-ef88ef069f9a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.808580 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b925ea00-8dd3-4ced-add2-41483f9b4d63-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.808597 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5b94bf96-a478-4e6d-adce-ef88ef069f9a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.808622 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5b94bf96-a478-4e6d-adce-ef88ef069f9a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.809059 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810234 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b925ea00-8dd3-4ced-add2-41483f9b4d63-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810291 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " 
pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810477 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810513 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7348a3a4c87ad15691f070cedb4f3e77a1f723dc7eb0f4e1690a0ddd1e340792/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810539 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.810566 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4b512534cc5a7f21b33807f7ee05096a151cc01b256b947edd979789f15adf56/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.822558 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-8p798" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.829516 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqff\" (UniqueName: \"kubernetes.io/projected/5b94bf96-a478-4e6d-adce-ef88ef069f9a-kube-api-access-thqff\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.831064 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wr4m\" (UniqueName: \"kubernetes.io/projected/b925ea00-8dd3-4ced-add2-41483f9b4d63-kube-api-access-9wr4m\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.846963 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-91b8da66-fa50-407f-93ad-a7b4ec0b2398\") pod \"rabbitmq-server-0\" (UID: \"5b94bf96-a478-4e6d-adce-ef88ef069f9a\") " pod="openstack/rabbitmq-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.850901 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0912a214-fbd9-43cb-b411-3aac05e959ad\") pod \"rabbitmq-cell1-server-0\" (UID: \"b925ea00-8dd3-4ced-add2-41483f9b4d63\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.877462 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:51:35 crc kubenswrapper[4834]: 
I0121 15:51:35.877799 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="dnsmasq-dns" containerID="cri-o://f1471b709271c1dbffd5ff03a6478079e7eedc9507798f8ff5824e9bde65e1e8" gracePeriod=10 Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.940997 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:51:35 crc kubenswrapper[4834]: I0121 15:51:35.955706 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.333526 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcdd395-4d82-4632-b6ae-deb0da55378e" path="/var/lib/kubelet/pods/0bcdd395-4d82-4632-b6ae-deb0da55378e/volumes" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.334589 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af1ebf0-0161-4f52-9cf5-5afffcaa68c7" path="/var/lib/kubelet/pods/9af1ebf0-0161-4f52-9cf5-5afffcaa68c7/volumes" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.527399 4834 generic.go:334] "Generic (PLEG): container finished" podID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerID="f1471b709271c1dbffd5ff03a6478079e7eedc9507798f8ff5824e9bde65e1e8" exitCode=0 Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.527468 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" event={"ID":"7940c21a-b6dd-4f1b-a744-d36ed1478875","Type":"ContainerDied","Data":"f1471b709271c1dbffd5ff03a6478079e7eedc9507798f8ff5824e9bde65e1e8"} Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.527512 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" event={"ID":"7940c21a-b6dd-4f1b-a744-d36ed1478875","Type":"ContainerDied","Data":"e9d1e93ec120e15cdbd7280b904a2a766d38c441266844e54ca34d3eaf2dc7ac"} Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.527526 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9d1e93ec120e15cdbd7280b904a2a766d38c441266844e54ca34d3eaf2dc7ac" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.588824 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.633100 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.704368 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 15:51:36 crc kubenswrapper[4834]: W0121 15:51:36.706073 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b94bf96_a478_4e6d_adce_ef88ef069f9a.slice/crio-38a81afc07a122efcfcb956d0af77d0b8daa613dcde43f2859c22691b0a11ee1 WatchSource:0}: Error finding container 38a81afc07a122efcfcb956d0af77d0b8daa613dcde43f2859c22691b0a11ee1: Status 404 returned error can't find the container with id 38a81afc07a122efcfcb956d0af77d0b8daa613dcde43f2859c22691b0a11ee1 Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.742985 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc\") pod \"7940c21a-b6dd-4f1b-a744-d36ed1478875\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.743110 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbvt9\" (UniqueName: \"kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9\") pod \"7940c21a-b6dd-4f1b-a744-d36ed1478875\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.743624 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config\") pod \"7940c21a-b6dd-4f1b-a744-d36ed1478875\" (UID: \"7940c21a-b6dd-4f1b-a744-d36ed1478875\") " Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.748293 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9" (OuterVolumeSpecName: "kube-api-access-lbvt9") pod "7940c21a-b6dd-4f1b-a744-d36ed1478875" (UID: "7940c21a-b6dd-4f1b-a744-d36ed1478875"). InnerVolumeSpecName "kube-api-access-lbvt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.782353 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7940c21a-b6dd-4f1b-a744-d36ed1478875" (UID: "7940c21a-b6dd-4f1b-a744-d36ed1478875"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.783763 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config" (OuterVolumeSpecName: "config") pod "7940c21a-b6dd-4f1b-a744-d36ed1478875" (UID: "7940c21a-b6dd-4f1b-a744-d36ed1478875"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.846200 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbvt9\" (UniqueName: \"kubernetes.io/projected/7940c21a-b6dd-4f1b-a744-d36ed1478875-kube-api-access-lbvt9\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.846235 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:36 crc kubenswrapper[4834]: I0121 15:51:36.846244 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7940c21a-b6dd-4f1b-a744-d36ed1478875-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:37 crc kubenswrapper[4834]: I0121 15:51:37.539052 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b94bf96-a478-4e6d-adce-ef88ef069f9a","Type":"ContainerStarted","Data":"38a81afc07a122efcfcb956d0af77d0b8daa613dcde43f2859c22691b0a11ee1"} Jan 21 15:51:37 crc kubenswrapper[4834]: I0121 15:51:37.541138 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b925ea00-8dd3-4ced-add2-41483f9b4d63","Type":"ContainerStarted","Data":"7e612ef0fc4494957d98c834b37f9deea3345c92405db894771af7227c743084"} Jan 21 15:51:37 crc kubenswrapper[4834]: I0121 15:51:37.541204 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-qvwmd" Jan 21 15:51:37 crc kubenswrapper[4834]: I0121 15:51:37.580335 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:51:37 crc kubenswrapper[4834]: I0121 15:51:37.586292 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-qvwmd"] Jan 21 15:51:38 crc kubenswrapper[4834]: I0121 15:51:38.333804 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" path="/var/lib/kubelet/pods/7940c21a-b6dd-4f1b-a744-d36ed1478875/volumes" Jan 21 15:51:38 crc kubenswrapper[4834]: I0121 15:51:38.549864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b94bf96-a478-4e6d-adce-ef88ef069f9a","Type":"ContainerStarted","Data":"06a23feb2b411c40de31bb232b64f708409b2871bfa3c2753c71fb6668639ed9"} Jan 21 15:51:38 crc kubenswrapper[4834]: I0121 15:51:38.551099 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b925ea00-8dd3-4ced-add2-41483f9b4d63","Type":"ContainerStarted","Data":"4cbf30d243a2c0c55e97024bfc1d34b325b83b7888f4b71be907bd9ef58d08a8"} Jan 21 15:52:10 crc kubenswrapper[4834]: I0121 15:52:10.815149 4834 generic.go:334] "Generic (PLEG): container finished" podID="b925ea00-8dd3-4ced-add2-41483f9b4d63" containerID="4cbf30d243a2c0c55e97024bfc1d34b325b83b7888f4b71be907bd9ef58d08a8" exitCode=0 Jan 21 15:52:10 crc kubenswrapper[4834]: I0121 15:52:10.815240 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b925ea00-8dd3-4ced-add2-41483f9b4d63","Type":"ContainerDied","Data":"4cbf30d243a2c0c55e97024bfc1d34b325b83b7888f4b71be907bd9ef58d08a8"} Jan 21 15:52:10 crc kubenswrapper[4834]: I0121 15:52:10.821416 4834 generic.go:334] "Generic (PLEG): container finished" 
podID="5b94bf96-a478-4e6d-adce-ef88ef069f9a" containerID="06a23feb2b411c40de31bb232b64f708409b2871bfa3c2753c71fb6668639ed9" exitCode=0 Jan 21 15:52:10 crc kubenswrapper[4834]: I0121 15:52:10.821466 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b94bf96-a478-4e6d-adce-ef88ef069f9a","Type":"ContainerDied","Data":"06a23feb2b411c40de31bb232b64f708409b2871bfa3c2753c71fb6668639ed9"} Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.833593 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b925ea00-8dd3-4ced-add2-41483f9b4d63","Type":"ContainerStarted","Data":"c0a85c3def79ec8dfc8154669ab1afacc0c57cb5c8c548580c9b93bbcda8eb69"} Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.835329 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.837122 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5b94bf96-a478-4e6d-adce-ef88ef069f9a","Type":"ContainerStarted","Data":"379d721a1516c5a773e0696cccabc1edb0035b2894442d7ffb28f9832ca775fa"} Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.837403 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.870788 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.870755844 podStartE2EDuration="36.870755844s" podCreationTimestamp="2026-01-21 15:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:11.86454154 +0000 UTC m=+4877.838890635" watchObservedRunningTime="2026-01-21 15:52:11.870755844 +0000 UTC m=+4877.845104929" Jan 21 15:52:11 crc kubenswrapper[4834]: I0121 15:52:11.906154 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.906125658 podStartE2EDuration="36.906125658s" podCreationTimestamp="2026-01-21 15:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:11.900779801 +0000 UTC m=+4877.875128866" watchObservedRunningTime="2026-01-21 15:52:11.906125658 +0000 UTC m=+4877.880474713" Jan 21 15:52:25 crc kubenswrapper[4834]: I0121 15:52:25.944143 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 15:52:25 crc kubenswrapper[4834]: I0121 15:52:25.980319 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.219408 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:37 crc kubenswrapper[4834]: E0121 15:52:37.222478 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="dnsmasq-dns" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.222505 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="dnsmasq-dns" Jan 21 15:52:37 crc kubenswrapper[4834]: E0121 15:52:37.222546 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="init" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.222554 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="init" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.222729 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7940c21a-b6dd-4f1b-a744-d36ed1478875" containerName="dnsmasq-dns" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.223536 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.226172 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qh25h" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.233874 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.315188 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvn8w\" (UniqueName: \"kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w\") pod \"mariadb-client\" (UID: \"036c00d9-e7ff-4a65-98fc-79e3941cc5d9\") " pod="openstack/mariadb-client" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.416688 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvn8w\" (UniqueName: \"kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w\") pod \"mariadb-client\" (UID: \"036c00d9-e7ff-4a65-98fc-79e3941cc5d9\") " pod="openstack/mariadb-client" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.439081 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvn8w\" (UniqueName: \"kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w\") pod \"mariadb-client\" (UID: \"036c00d9-e7ff-4a65-98fc-79e3941cc5d9\") " pod="openstack/mariadb-client" Jan 21 15:52:37 crc kubenswrapper[4834]: I0121 15:52:37.551969 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:52:38 crc kubenswrapper[4834]: I0121 15:52:38.158721 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:38 crc kubenswrapper[4834]: I0121 15:52:38.168759 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:52:38 crc kubenswrapper[4834]: I0121 15:52:38.984100 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"036c00d9-e7ff-4a65-98fc-79e3941cc5d9","Type":"ContainerStarted","Data":"2f0fa0559b3072d876af5d1145c5a79cb50087dff494c94d288f380f4dfb1a22"} Jan 21 15:52:38 crc kubenswrapper[4834]: I0121 15:52:38.984457 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"036c00d9-e7ff-4a65-98fc-79e3941cc5d9","Type":"ContainerStarted","Data":"24141234b2464e8730290da79a7f5d6a57da3032bb7839de3eb4c708d5a4e85b"} Jan 21 15:52:39 crc kubenswrapper[4834]: I0121 15:52:39.010170 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.447306387 podStartE2EDuration="2.010133472s" podCreationTimestamp="2026-01-21 15:52:37 +0000 UTC" firstStartedPulling="2026-01-21 15:52:38.168526136 +0000 UTC m=+4904.142875181" lastFinishedPulling="2026-01-21 15:52:38.731353221 +0000 UTC m=+4904.705702266" observedRunningTime="2026-01-21 15:52:39.000197582 +0000 UTC m=+4904.974546637" watchObservedRunningTime="2026-01-21 15:52:39.010133472 +0000 UTC m=+4904.984482557" Jan 21 15:52:43 crc kubenswrapper[4834]: E0121 15:52:43.277296 4834 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.45:34810->38.102.83.45:34455: write tcp 38.102.83.45:34810->38.102.83.45:34455: write: broken pipe Jan 21 15:52:47 crc kubenswrapper[4834]: E0121 15:52:47.500345 4834 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.45:34836->38.102.83.45:34455: write tcp 38.102.83.45:34836->38.102.83.45:34455: write: broken pipe Jan 21 15:52:52 crc kubenswrapper[4834]: I0121 15:52:52.323392 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:52 crc kubenswrapper[4834]: I0121 15:52:52.326358 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" containerName="mariadb-client" containerID="cri-o://2f0fa0559b3072d876af5d1145c5a79cb50087dff494c94d288f380f4dfb1a22" gracePeriod=30 Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.129692 4834 generic.go:334] "Generic (PLEG): container finished" podID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" containerID="2f0fa0559b3072d876af5d1145c5a79cb50087dff494c94d288f380f4dfb1a22" exitCode=143 Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.130181 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"036c00d9-e7ff-4a65-98fc-79e3941cc5d9","Type":"ContainerDied","Data":"2f0fa0559b3072d876af5d1145c5a79cb50087dff494c94d288f380f4dfb1a22"} Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.339338 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.501353 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvn8w\" (UniqueName: \"kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w\") pod \"036c00d9-e7ff-4a65-98fc-79e3941cc5d9\" (UID: \"036c00d9-e7ff-4a65-98fc-79e3941cc5d9\") " Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.510363 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w" (OuterVolumeSpecName: "kube-api-access-bvn8w") pod "036c00d9-e7ff-4a65-98fc-79e3941cc5d9" (UID: "036c00d9-e7ff-4a65-98fc-79e3941cc5d9"). InnerVolumeSpecName "kube-api-access-bvn8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:53 crc kubenswrapper[4834]: I0121 15:52:53.604108 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvn8w\" (UniqueName: \"kubernetes.io/projected/036c00d9-e7ff-4a65-98fc-79e3941cc5d9-kube-api-access-bvn8w\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.141220 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"036c00d9-e7ff-4a65-98fc-79e3941cc5d9","Type":"ContainerDied","Data":"24141234b2464e8730290da79a7f5d6a57da3032bb7839de3eb4c708d5a4e85b"} Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.141326 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.141852 4834 scope.go:117] "RemoveContainer" containerID="2f0fa0559b3072d876af5d1145c5a79cb50087dff494c94d288f380f4dfb1a22" Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.178505 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.186336 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:52:54 crc kubenswrapper[4834]: I0121 15:52:54.340326 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" path="/var/lib/kubelet/pods/036c00d9-e7ff-4a65-98fc-79e3941cc5d9/volumes" Jan 21 15:53:17 crc kubenswrapper[4834]: I0121 15:53:17.113476 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:53:17 crc kubenswrapper[4834]: I0121 15:53:17.114087 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:53:47 crc kubenswrapper[4834]: I0121 15:53:47.113725 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:53:47 crc kubenswrapper[4834]: I0121 15:53:47.114338 4834 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:54:07 crc kubenswrapper[4834]: I0121 15:54:07.334274 4834 scope.go:117] "RemoveContainer" containerID="98b9cd38bbbcf3404d46767ddddd15aae4ef1a6dae3833317899242a09b1e2aa"
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.115178 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.115676 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.115738 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.116952 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.117134 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" gracePeriod=600
Jan 21 15:54:17 crc kubenswrapper[4834]: E0121 15:54:17.264998 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.835487 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" exitCode=0
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.835540 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"}
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.835595 4834 scope.go:117] "RemoveContainer" containerID="dcb83c48b1306784ab7a9755c35c5dd73199c7ff7870be731322df946b6513c0"
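[Editor's note] The liveness probe on http://127.0.0.1:8798/health has now failed repeatedly (connection refused), so the kubelet kills machine-config-daemon and the container lands in CrashLoopBackOff. The "back-off 5m0s" in the records that follow is Kubernetes' documented restart backoff: the delay starts at 10s, doubles on each crash, is capped at five minutes, and resets only after a container runs for ten minutes without failing. A sketch of that schedule (illustrative, not kubelet's actual implementation):

    def crashloop_delays(restarts, base=10.0, cap=300.0):
        """Kubernetes-style restart backoff: 10s, 20s, 40s, ... capped at 5m0s."""
        return [min(base * 2 ** i, cap) for i in range(restarts)]

    print(crashloop_delays(7))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

While the five-minute window is open, the pod worker still re-syncs every ten seconds or so, which is why the paired "RemoveContainer" / "Error syncing pod, skipping ... back-off 5m0s" records below repeat at that cadence until the backoff expires.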
Jan 21 15:54:17 crc kubenswrapper[4834]: I0121 15:54:17.836323 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:54:17 crc kubenswrapper[4834]: E0121 15:54:17.836603 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:54:32 crc kubenswrapper[4834]: I0121 15:54:32.324842 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:54:32 crc kubenswrapper[4834]: E0121 15:54:32.325664 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:54:47 crc kubenswrapper[4834]: I0121 15:54:47.325665 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:54:47 crc kubenswrapper[4834]: E0121 15:54:47.326817 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:54:58 crc kubenswrapper[4834]: I0121 15:54:58.324464 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:54:58 crc kubenswrapper[4834]: E0121 15:54:58.325245 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:55:09 crc kubenswrapper[4834]: I0121 15:55:09.324135 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:55:09 crc kubenswrapper[4834]: E0121 15:55:09.324900 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 15:55:21 crc kubenswrapper[4834]: I0121 15:55:21.324794 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:55:21 crc
kubenswrapper[4834]: E0121 15:55:21.325684 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:55:36 crc kubenswrapper[4834]: I0121 15:55:36.325265 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:55:36 crc kubenswrapper[4834]: E0121 15:55:36.326078 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:55:49 crc kubenswrapper[4834]: I0121 15:55:49.324875 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:55:49 crc kubenswrapper[4834]: E0121 15:55:49.325686 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:56:03 crc kubenswrapper[4834]: I0121 15:56:03.324746 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:56:03 crc kubenswrapper[4834]: E0121 15:56:03.325647 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:56:18 crc kubenswrapper[4834]: I0121 15:56:18.329497 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:56:18 crc kubenswrapper[4834]: E0121 15:56:18.331655 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:56:30 crc kubenswrapper[4834]: I0121 15:56:30.325062 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:56:30 crc kubenswrapper[4834]: E0121 15:56:30.326728 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:56:41 crc kubenswrapper[4834]: I0121 15:56:41.324539 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:56:41 crc kubenswrapper[4834]: E0121 15:56:41.325292 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:56:56 crc kubenswrapper[4834]: I0121 15:56:56.325330 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:56:56 crc kubenswrapper[4834]: E0121 15:56:56.326351 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:57:07 crc kubenswrapper[4834]: I0121 15:57:07.411887 4834 scope.go:117] "RemoveContainer" containerID="f1471b709271c1dbffd5ff03a6478079e7eedc9507798f8ff5824e9bde65e1e8" Jan 21 15:57:07 crc kubenswrapper[4834]: I0121 15:57:07.438089 4834 scope.go:117] "RemoveContainer" containerID="5021598e476a84b0b49220f8124f5ac19753b9c68f47d844be94ded9929c19ec" Jan 21 15:57:07 crc kubenswrapper[4834]: I0121 15:57:07.469950 4834 scope.go:117] "RemoveContainer" containerID="adb67ffbc84fa8163ca1ba426942a9db274c356395fab133474bf321f9ef74d1" Jan 21 15:57:09 crc kubenswrapper[4834]: I0121 15:57:09.325613 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:57:09 crc kubenswrapper[4834]: E0121 15:57:09.326099 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.175004 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 15:57:20 crc kubenswrapper[4834]: E0121 15:57:20.176342 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" containerName="mariadb-client" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.176387 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" containerName="mariadb-client" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.176654 4834 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="036c00d9-e7ff-4a65-98fc-79e3941cc5d9" containerName="mariadb-client" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.177610 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.181244 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qh25h" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.183666 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.304552 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17002ac6-9284-4eb7-aa10-528612ff5920\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17002ac6-9284-4eb7-aa10-528612ff5920\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.304636 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stpjx\" (UniqueName: \"kubernetes.io/projected/f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf-kube-api-access-stpjx\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.405699 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-17002ac6-9284-4eb7-aa10-528612ff5920\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17002ac6-9284-4eb7-aa10-528612ff5920\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.405804 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stpjx\" (UniqueName: \"kubernetes.io/projected/f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf-kube-api-access-stpjx\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.409213 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.409255 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-17002ac6-9284-4eb7-aa10-528612ff5920\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17002ac6-9284-4eb7-aa10-528612ff5920\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/710b8acb82c5b9eb8e6978bd677235d5d7f5d1de83d38ac76ac555b0358e4f7b/globalmount\"" pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.430894 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stpjx\" (UniqueName: \"kubernetes.io/projected/f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf-kube-api-access-stpjx\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.455002 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-17002ac6-9284-4eb7-aa10-528612ff5920\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-17002ac6-9284-4eb7-aa10-528612ff5920\") pod \"mariadb-copy-data\" (UID: \"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf\") " pod="openstack/mariadb-copy-data" Jan 21 15:57:20 crc kubenswrapper[4834]: I0121 15:57:20.507815 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 21 15:57:21 crc kubenswrapper[4834]: I0121 15:57:21.005536 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 21 15:57:21 crc kubenswrapper[4834]: I0121 15:57:21.324396 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:57:21 crc kubenswrapper[4834]: E0121 15:57:21.325365 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:57:21 crc kubenswrapper[4834]: I0121 15:57:21.431012 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf","Type":"ContainerStarted","Data":"9861d68517f4c686ec4fd9d74769f024d0045e15262abfba541e74dbf3d56c97"} Jan 21 15:57:21 crc kubenswrapper[4834]: I0121 15:57:21.431068 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf","Type":"ContainerStarted","Data":"8f9a2514ef560d421f205746f9a43c1620022354d7fc457391e534dea4e62aeb"} Jan 21 15:57:21 crc kubenswrapper[4834]: I0121 15:57:21.451852 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.451794994 podStartE2EDuration="2.451794994s" podCreationTimestamp="2026-01-21 15:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:21.443609599 +0000 UTC m=+5187.417958654" watchObservedRunningTime="2026-01-21 15:57:21.451794994 +0000 UTC m=+5187.426144049" Jan 21 15:57:24 crc 
kubenswrapper[4834]: I0121 15:57:24.214151 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.216381 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.232058 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.269364 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cs\" (UniqueName: \"kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs\") pod \"mariadb-client\" (UID: \"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b\") " pod="openstack/mariadb-client" Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.370635 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cs\" (UniqueName: \"kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs\") pod \"mariadb-client\" (UID: \"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b\") " pod="openstack/mariadb-client" Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.389335 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cs\" (UniqueName: \"kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs\") pod \"mariadb-client\" (UID: \"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b\") " pod="openstack/mariadb-client" Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.542838 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:24 crc kubenswrapper[4834]: I0121 15:57:24.995293 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:25 crc kubenswrapper[4834]: I0121 15:57:25.465058 4834 generic.go:334] "Generic (PLEG): container finished" podID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" containerID="cc95a067551439a83e1cb08dbb0f94e548fee9c5265fd1cee1a72ba2e6ad87b9" exitCode=0 Jan 21 15:57:25 crc kubenswrapper[4834]: I0121 15:57:25.465117 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b","Type":"ContainerDied","Data":"cc95a067551439a83e1cb08dbb0f94e548fee9c5265fd1cee1a72ba2e6ad87b9"} Jan 21 15:57:25 crc kubenswrapper[4834]: I0121 15:57:25.465343 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b","Type":"ContainerStarted","Data":"24f152cebf1c076295da0f7528cedda5ecdf104ea6002b8d7cfc5a26afe8dd8d"} Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.804001 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.830610 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_e8ad8ff6-4e21-4ec4-9960-af447fe80f2b/mariadb-client/0.log" Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.851807 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.857467 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.909429 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45cs\" (UniqueName: \"kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs\") pod \"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b\" (UID: \"e8ad8ff6-4e21-4ec4-9960-af447fe80f2b\") " Jan 21 15:57:26 crc kubenswrapper[4834]: I0121 15:57:26.916167 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs" (OuterVolumeSpecName: "kube-api-access-h45cs") pod "e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" (UID: "e8ad8ff6-4e21-4ec4-9960-af447fe80f2b"). InnerVolumeSpecName "kube-api-access-h45cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:26.999879 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:27 crc kubenswrapper[4834]: E0121 15:57:27.000254 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" containerName="mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.000279 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" containerName="mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.000458 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" containerName="mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.001057 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.011150 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h45cs\" (UniqueName: \"kubernetes.io/projected/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b-kube-api-access-h45cs\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.017585 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.112070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtcr\" (UniqueName: \"kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr\") pod \"mariadb-client\" (UID: \"99410075-e531-465d-8953-5c2efb94c5d8\") " pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.213161 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtcr\" (UniqueName: \"kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr\") pod \"mariadb-client\" (UID: \"99410075-e531-465d-8953-5c2efb94c5d8\") " pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.232756 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtcr\" (UniqueName: \"kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr\") pod \"mariadb-client\" (UID: \"99410075-e531-465d-8953-5c2efb94c5d8\") " pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.321007 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.488775 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f152cebf1c076295da0f7528cedda5ecdf104ea6002b8d7cfc5a26afe8dd8d" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.488878 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.507921 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" podUID="99410075-e531-465d-8953-5c2efb94c5d8" Jan 21 15:57:27 crc kubenswrapper[4834]: I0121 15:57:27.552464 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:28 crc kubenswrapper[4834]: I0121 15:57:28.337795 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ad8ff6-4e21-4ec4-9960-af447fe80f2b" path="/var/lib/kubelet/pods/e8ad8ff6-4e21-4ec4-9960-af447fe80f2b/volumes" Jan 21 15:57:28 crc kubenswrapper[4834]: I0121 15:57:28.498352 4834 generic.go:334] "Generic (PLEG): container finished" podID="99410075-e531-465d-8953-5c2efb94c5d8" containerID="3a0ce198ae3e6f6c3501779aba92e4b2e45b5bff91365d164b045d3979edef97" exitCode=0 Jan 21 15:57:28 crc kubenswrapper[4834]: I0121 15:57:28.498402 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"99410075-e531-465d-8953-5c2efb94c5d8","Type":"ContainerDied","Data":"3a0ce198ae3e6f6c3501779aba92e4b2e45b5bff91365d164b045d3979edef97"} Jan 21 15:57:28 crc kubenswrapper[4834]: I0121 15:57:28.498440 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"99410075-e531-465d-8953-5c2efb94c5d8","Type":"ContainerStarted","Data":"5fc6d20ab73747d8655d437ec88b73082b19409a8a2fbd5283ada649f90317c8"} Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.901333 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.918639 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_99410075-e531-465d-8953-5c2efb94c5d8/mariadb-client/0.log" Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.940182 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.945151 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.956697 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvtcr\" (UniqueName: \"kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr\") pod \"99410075-e531-465d-8953-5c2efb94c5d8\" (UID: \"99410075-e531-465d-8953-5c2efb94c5d8\") " Jan 21 15:57:29 crc kubenswrapper[4834]: I0121 15:57:29.969138 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr" (OuterVolumeSpecName: "kube-api-access-fvtcr") pod "99410075-e531-465d-8953-5c2efb94c5d8" (UID: "99410075-e531-465d-8953-5c2efb94c5d8"). InnerVolumeSpecName "kube-api-access-fvtcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:57:30 crc kubenswrapper[4834]: I0121 15:57:30.058746 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvtcr\" (UniqueName: \"kubernetes.io/projected/99410075-e531-465d-8953-5c2efb94c5d8-kube-api-access-fvtcr\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:30 crc kubenswrapper[4834]: I0121 15:57:30.333279 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99410075-e531-465d-8953-5c2efb94c5d8" path="/var/lib/kubelet/pods/99410075-e531-465d-8953-5c2efb94c5d8/volumes" Jan 21 15:57:30 crc kubenswrapper[4834]: I0121 15:57:30.513342 4834 scope.go:117] "RemoveContainer" containerID="3a0ce198ae3e6f6c3501779aba92e4b2e45b5bff91365d164b045d3979edef97" Jan 21 15:57:30 crc kubenswrapper[4834]: I0121 15:57:30.513369 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 21 15:57:36 crc kubenswrapper[4834]: I0121 15:57:36.325076 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:57:36 crc kubenswrapper[4834]: E0121 15:57:36.326060 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:57:51 crc kubenswrapper[4834]: I0121 15:57:51.325883 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:57:51 crc kubenswrapper[4834]: E0121 15:57:51.326655 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:06 crc kubenswrapper[4834]: I0121 15:58:06.324513 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:58:06 crc kubenswrapper[4834]: E0121 15:58:06.325438 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.481545 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:18 crc kubenswrapper[4834]: E0121 15:58:18.482693 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99410075-e531-465d-8953-5c2efb94c5d8" containerName="mariadb-client" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.482712 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="99410075-e531-465d-8953-5c2efb94c5d8" containerName="mariadb-client" Jan 21 15:58:18 crc 
kubenswrapper[4834]: I0121 15:58:18.482875 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="99410075-e531-465d-8953-5c2efb94c5d8" containerName="mariadb-client" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.484288 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.489357 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.495860 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869m7\" (UniqueName: \"kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.495966 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.496003 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.598385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.598500 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.598612 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869m7\" (UniqueName: \"kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.599015 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.599259 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.627020 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869m7\" (UniqueName: \"kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7\") pod \"redhat-operators-fln9n\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:18 crc kubenswrapper[4834]: I0121 15:58:18.807138 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:19 crc kubenswrapper[4834]: I0121 15:58:19.268518 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:19 crc kubenswrapper[4834]: I0121 15:58:19.885418 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerDied","Data":"0bc3ded297c7793c5a6441fbd324b07fe8f821e563d3926a0f05b9f99f9ea61a"} Jan 21 15:58:19 crc kubenswrapper[4834]: I0121 15:58:19.885254 4834 generic.go:334] "Generic (PLEG): container finished" podID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerID="0bc3ded297c7793c5a6441fbd324b07fe8f821e563d3926a0f05b9f99f9ea61a" exitCode=0 Jan 21 15:58:19 crc kubenswrapper[4834]: I0121 15:58:19.886317 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerStarted","Data":"0e2d326593a69ab51fa7583e93473369c4784daa50c89b87a27fd339391e5849"} Jan 21 15:58:19 crc kubenswrapper[4834]: I0121 15:58:19.887869 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:58:21 crc kubenswrapper[4834]: I0121 15:58:21.324972 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:58:21 crc kubenswrapper[4834]: E0121 15:58:21.325722 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:21 crc kubenswrapper[4834]: I0121 15:58:21.907029 4834 generic.go:334] "Generic (PLEG): container finished" podID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerID="68d62d6218bd7c80124f11bf09171a99c20d7eed79c7032004f51aa3b0f1bf4c" exitCode=0 Jan 21 15:58:21 crc kubenswrapper[4834]: I0121 15:58:21.907095 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerDied","Data":"68d62d6218bd7c80124f11bf09171a99c20d7eed79c7032004f51aa3b0f1bf4c"} Jan 21 15:58:22 crc kubenswrapper[4834]: I0121 15:58:22.918018 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" 
event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerStarted","Data":"8ab7c9a4dedd6d9bb00161fa99242b306c639745058eb2d1aad07ef7d937b1e7"} Jan 21 15:58:22 crc kubenswrapper[4834]: I0121 15:58:22.944376 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fln9n" podStartSLOduration=2.3628210259999998 podStartE2EDuration="4.944342787s" podCreationTimestamp="2026-01-21 15:58:18 +0000 UTC" firstStartedPulling="2026-01-21 15:58:19.887198292 +0000 UTC m=+5245.861547337" lastFinishedPulling="2026-01-21 15:58:22.468720063 +0000 UTC m=+5248.443069098" observedRunningTime="2026-01-21 15:58:22.936618566 +0000 UTC m=+5248.910967611" watchObservedRunningTime="2026-01-21 15:58:22.944342787 +0000 UTC m=+5248.918691832" Jan 21 15:58:28 crc kubenswrapper[4834]: I0121 15:58:28.807921 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:28 crc kubenswrapper[4834]: I0121 15:58:28.808453 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:28 crc kubenswrapper[4834]: I0121 15:58:28.854240 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:28 crc kubenswrapper[4834]: I0121 15:58:28.999567 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:29 crc kubenswrapper[4834]: I0121 15:58:29.088530 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:30 crc kubenswrapper[4834]: I0121 15:58:30.975581 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fln9n" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="registry-server" containerID="cri-o://8ab7c9a4dedd6d9bb00161fa99242b306c639745058eb2d1aad07ef7d937b1e7" gracePeriod=2 Jan 21 15:58:31 crc kubenswrapper[4834]: I0121 15:58:31.985803 4834 generic.go:334] "Generic (PLEG): container finished" podID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerID="8ab7c9a4dedd6d9bb00161fa99242b306c639745058eb2d1aad07ef7d937b1e7" exitCode=0 Jan 21 15:58:31 crc kubenswrapper[4834]: I0121 15:58:31.985869 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerDied","Data":"8ab7c9a4dedd6d9bb00161fa99242b306c639745058eb2d1aad07ef7d937b1e7"} Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.106117 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.220692 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content\") pod \"6e6232b3-66ad-49ee-8646-c4e8319927af\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.220910 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-869m7\" (UniqueName: \"kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7\") pod \"6e6232b3-66ad-49ee-8646-c4e8319927af\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.221007 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities\") pod \"6e6232b3-66ad-49ee-8646-c4e8319927af\" (UID: \"6e6232b3-66ad-49ee-8646-c4e8319927af\") " Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.222558 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities" (OuterVolumeSpecName: "utilities") pod "6e6232b3-66ad-49ee-8646-c4e8319927af" (UID: "6e6232b3-66ad-49ee-8646-c4e8319927af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.229242 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7" (OuterVolumeSpecName: "kube-api-access-869m7") pod "6e6232b3-66ad-49ee-8646-c4e8319927af" (UID: "6e6232b3-66ad-49ee-8646-c4e8319927af"). InnerVolumeSpecName "kube-api-access-869m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.325238 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-869m7\" (UniqueName: \"kubernetes.io/projected/6e6232b3-66ad-49ee-8646-c4e8319927af-kube-api-access-869m7\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:32 crc kubenswrapper[4834]: I0121 15:58:32.325313 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.024286 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fln9n" event={"ID":"6e6232b3-66ad-49ee-8646-c4e8319927af","Type":"ContainerDied","Data":"0e2d326593a69ab51fa7583e93473369c4784daa50c89b87a27fd339391e5849"} Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.024391 4834 scope.go:117] "RemoveContainer" containerID="8ab7c9a4dedd6d9bb00161fa99242b306c639745058eb2d1aad07ef7d937b1e7" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.024677 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fln9n" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.079970 4834 scope.go:117] "RemoveContainer" containerID="68d62d6218bd7c80124f11bf09171a99c20d7eed79c7032004f51aa3b0f1bf4c" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.129147 4834 scope.go:117] "RemoveContainer" containerID="0bc3ded297c7793c5a6441fbd324b07fe8f821e563d3926a0f05b9f99f9ea61a" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.680666 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e6232b3-66ad-49ee-8646-c4e8319927af" (UID: "6e6232b3-66ad-49ee-8646-c4e8319927af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.750182 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6232b3-66ad-49ee-8646-c4e8319927af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.955693 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:33 crc kubenswrapper[4834]: I0121 15:58:33.964040 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fln9n"] Jan 21 15:58:34 crc kubenswrapper[4834]: I0121 15:58:34.336604 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" path="/var/lib/kubelet/pods/6e6232b3-66ad-49ee-8646-c4e8319927af/volumes" Jan 21 15:58:36 crc kubenswrapper[4834]: I0121 15:58:36.324568 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:58:36 crc kubenswrapper[4834]: E0121 15:58:36.325134 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.585620 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:58:42 crc kubenswrapper[4834]: E0121 15:58:42.586703 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="extract-content" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.586737 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="extract-content" Jan 21 15:58:42 crc kubenswrapper[4834]: E0121 15:58:42.586769 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="registry-server" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.586781 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="registry-server" Jan 21 15:58:42 crc kubenswrapper[4834]: E0121 15:58:42.586807 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" 
containerName="extract-utilities" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.586820 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="extract-utilities" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.587097 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6232b3-66ad-49ee-8646-c4e8319927af" containerName="registry-server" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.588541 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.590655 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.590914 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.591357 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-lbt4c" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.597132 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.600909 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.617697 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.624269 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.645064 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.646892 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.653112 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.713964 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0c44c8-44bf-4166-a864-c2b993c2e042-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7dd5896-6634-4d8d-b95b-88138f750ca6-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714074 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-config\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714153 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7dd5896-6634-4d8d-b95b-88138f750ca6-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714250 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714279 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a0c44c8-44bf-4166-a864-c2b993c2e042-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714391 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714523 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714604 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30586894-d62b-41e6-b0aa-32a52c74c2d4-ovsdb-rundir\") 
pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714657 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714687 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqfn\" (UniqueName: \"kubernetes.io/projected/30586894-d62b-41e6-b0aa-32a52c74c2d4-kube-api-access-hfqfn\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714727 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-config\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714865 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714903 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpl5w\" (UniqueName: \"kubernetes.io/projected/e7dd5896-6634-4d8d-b95b-88138f750ca6-kube-api-access-dpl5w\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.714960 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2zf\" (UniqueName: \"kubernetes.io/projected/2a0c44c8-44bf-4166-a864-c2b993c2e042-kube-api-access-ht2zf\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.715010 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30586894-d62b-41e6-b0aa-32a52c74c2d4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.715045 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.775985 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.777554 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.780431 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-62l46" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.780675 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.781092 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.796215 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.808968 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.810485 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.815951 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2693b793-2077-44df-bb32-bb5e800addde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2693b793-2077-44df-bb32-bb5e800addde\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816008 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816040 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a0c44c8-44bf-4166-a864-c2b993c2e042-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bb6d32-c183-4cad-9cfa-6b40f80551d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816090 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70bb6d32-c183-4cad-9cfa-6b40f80551d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816112 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816147 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816181 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30586894-d62b-41e6-b0aa-32a52c74c2d4-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816208 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816227 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqfn\" (UniqueName: \"kubernetes.io/projected/30586894-d62b-41e6-b0aa-32a52c74c2d4-kube-api-access-hfqfn\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816253 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816268 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87xp\" (UniqueName: \"kubernetes.io/projected/70bb6d32-c183-4cad-9cfa-6b40f80551d8-kube-api-access-t87xp\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816303 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-config\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816340 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpl5w\" (UniqueName: \"kubernetes.io/projected/e7dd5896-6634-4d8d-b95b-88138f750ca6-kube-api-access-dpl5w\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816364 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\") pod \"ovsdbserver-sb-2\" (UID: 
\"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816389 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2zf\" (UniqueName: \"kubernetes.io/projected/2a0c44c8-44bf-4166-a864-c2b993c2e042-kube-api-access-ht2zf\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816414 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30586894-d62b-41e6-b0aa-32a52c74c2d4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816430 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-config\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816462 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816488 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0c44c8-44bf-4166-a864-c2b993c2e042-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816510 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7dd5896-6634-4d8d-b95b-88138f750ca6-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816540 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-config\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816559 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7dd5896-6634-4d8d-b95b-88138f750ca6-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.816576 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.817171 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/30586894-d62b-41e6-b0aa-32a52c74c2d4-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.817505 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a0c44c8-44bf-4166-a864-c2b993c2e042-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.818154 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.818332 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-config\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.818510 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e7dd5896-6634-4d8d-b95b-88138f750ca6-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.818849 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7dd5896-6634-4d8d-b95b-88138f750ca6-config\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.818847 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.819388 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.819990 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30586894-d62b-41e6-b0aa-32a52c74c2d4-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.820363 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0c44c8-44bf-4166-a864-c2b993c2e042-config\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.826073 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7dd5896-6634-4d8d-b95b-88138f750ca6-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.826628 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a0c44c8-44bf-4166-a864-c2b993c2e042-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.827092 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.827128 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/495246fb697a70a9956190ded8d0edb6cd1b862f0964510fbddbb56a74ce6346/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.828779 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.832543 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30586894-d62b-41e6-b0aa-32a52c74c2d4-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.840210 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.840267 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/af54fd6d9d61e817896b07d1c29806f1fafa781d23b078707e64f8b639747e9e/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.841372 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.842270 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht2zf\" (UniqueName: \"kubernetes.io/projected/2a0c44c8-44bf-4166-a864-c2b993c2e042-kube-api-access-ht2zf\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.843129 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.843213 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/957eea37c026eee351b79fca6606a8fd50407e826e58f6e3aa41da349726804c/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.855270 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqfn\" (UniqueName: \"kubernetes.io/projected/30586894-d62b-41e6-b0aa-32a52c74c2d4-kube-api-access-hfqfn\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.857835 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpl5w\" (UniqueName: \"kubernetes.io/projected/e7dd5896-6634-4d8d-b95b-88138f750ca6-kube-api-access-dpl5w\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.865144 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.897233 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5e5e74b-a221-4f10-bb40-da114081f0c0\") pod \"ovsdbserver-sb-1\" (UID: \"e7dd5896-6634-4d8d-b95b-88138f750ca6\") " pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.912884 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0255d23-e4c7-44c7-928e-39526a2b06f8\") pod \"ovsdbserver-sb-2\" (UID: \"30586894-d62b-41e6-b0aa-32a52c74c2d4\") " pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919003 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87xp\" (UniqueName: \"kubernetes.io/projected/70bb6d32-c183-4cad-9cfa-6b40f80551d8-kube-api-access-t87xp\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919071 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-config\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919141 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ln7\" (UniqueName: \"kubernetes.io/projected/e3cf6365-be8d-41d7-a56e-03259a20a210-kube-api-access-k9ln7\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919194 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919227 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919268 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3cf6365-be8d-41d7-a56e-03259a20a210-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919358 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2693b793-2077-44df-bb32-bb5e800addde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2693b793-2077-44df-bb32-bb5e800addde\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919396 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2np9g\" (UniqueName: \"kubernetes.io/projected/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-kube-api-access-2np9g\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919461 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cf6365-be8d-41d7-a56e-03259a20a210-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919500 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919529 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-config\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919560 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919586 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bb6d32-c183-4cad-9cfa-6b40f80551d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919626 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70bb6d32-c183-4cad-9cfa-6b40f80551d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919670 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919728 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.919803 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.923017 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.924533 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70bb6d32-c183-4cad-9cfa-6b40f80551d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.927603 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/70bb6d32-c183-4cad-9cfa-6b40f80551d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.929662 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.929715 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2693b793-2077-44df-bb32-bb5e800addde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2693b793-2077-44df-bb32-bb5e800addde\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/108eeff724d6f0691ed67eb1d16ab4d7a760ef9d7a48e461e9d5931ef787505f/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.930536 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70bb6d32-c183-4cad-9cfa-6b40f80551d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.939418 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87xp\" (UniqueName: \"kubernetes.io/projected/70bb6d32-c183-4cad-9cfa-6b40f80551d8-kube-api-access-t87xp\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.943822 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.945613 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dd974927-a4c9-4dd5-be54-4f740bb8c65a\") pod \"ovsdbserver-sb-0\" (UID: \"2a0c44c8-44bf-4166-a864-c2b993c2e042\") " pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.962614 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:42 crc kubenswrapper[4834]: I0121 15:58:42.968829 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2693b793-2077-44df-bb32-bb5e800addde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2693b793-2077-44df-bb32-bb5e800addde\") pod \"ovsdbserver-nb-0\" (UID: \"70bb6d32-c183-4cad-9cfa-6b40f80551d8\") " pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.023913 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ln7\" (UniqueName: \"kubernetes.io/projected/e3cf6365-be8d-41d7-a56e-03259a20a210-kube-api-access-k9ln7\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024423 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3cf6365-be8d-41d7-a56e-03259a20a210-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024483 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2np9g\" (UniqueName: \"kubernetes.io/projected/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-kube-api-access-2np9g\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024512 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cf6365-be8d-41d7-a56e-03259a20a210-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024531 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024554 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-config\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024573 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024616 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024665 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024716 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.024752 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-config\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.025915 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-config\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.025886 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-config\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.026890 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3cf6365-be8d-41d7-a56e-03259a20a210-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.027604 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.027785 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.027821 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b086d359ee72742989aed86983c7f012cd24f5611107e2b3d10847e8cf6296f5/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.028091 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3cf6365-be8d-41d7-a56e-03259a20a210-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.028297 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.029755 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.029783 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a58f856703ab1307df41be18ae611e02a336e43c66abe61d4ca5cff3c57e9e6/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.033148 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.034650 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cf6365-be8d-41d7-a56e-03259a20a210-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.049238 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ln7\" (UniqueName: \"kubernetes.io/projected/e3cf6365-be8d-41d7-a56e-03259a20a210-kube-api-access-k9ln7\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.049396 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2np9g\" (UniqueName: \"kubernetes.io/projected/3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3-kube-api-access-2np9g\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: 
I0121 15:58:43.062627 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-318e6c46-7fcf-4f32-8304-401b2e10dd51\") pod \"ovsdbserver-nb-1\" (UID: \"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3\") " pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.080327 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fb7295b-8ef8-4a9a-bb40-0cdc02a6ea1a\") pod \"ovsdbserver-nb-2\" (UID: \"e3cf6365-be8d-41d7-a56e-03259a20a210\") " pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.091875 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.233882 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.307554 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.313538 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.502766 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 21 15:58:43 crc kubenswrapper[4834]: W0121 15:58:43.516415 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7dd5896_6634_4d8d_b95b_88138f750ca6.slice/crio-c5e5a3b032956430f7ffb87a97a23610b292338a20b30cde0c2c013c9806a8c5 WatchSource:0}: Error finding container c5e5a3b032956430f7ffb87a97a23610b292338a20b30cde0c2c013c9806a8c5: Status 404 returned error can't find the container with id c5e5a3b032956430f7ffb87a97a23610b292338a20b30cde0c2c013c9806a8c5 Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.612906 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 21 15:58:43 crc kubenswrapper[4834]: W0121 15:58:43.617362 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30586894_d62b_41e6_b0aa_32a52c74c2d4.slice/crio-c27bafb47e2bde2cb3e2fb94127d8fe35f7f4a032e3f9bc352e16d5d673d2505 WatchSource:0}: Error finding container c27bafb47e2bde2cb3e2fb94127d8fe35f7f4a032e3f9bc352e16d5d673d2505: Status 404 returned error can't find the container with id c27bafb47e2bde2cb3e2fb94127d8fe35f7f4a032e3f9bc352e16d5d673d2505 Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.705681 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 15:58:43 crc kubenswrapper[4834]: W0121 15:58:43.740794 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70bb6d32_c183_4cad_9cfa_6b40f80551d8.slice/crio-6b17a89f8ab5753f5255b581feae0cd1f54577747a24f605ff9ab2dfc1fb7ccc WatchSource:0}: Error finding container 6b17a89f8ab5753f5255b581feae0cd1f54577747a24f605ff9ab2dfc1fb7ccc: Status 404 returned error can't find the container with id 6b17a89f8ab5753f5255b581feae0cd1f54577747a24f605ff9ab2dfc1fb7ccc Jan 21 15:58:43 crc 
kubenswrapper[4834]: I0121 15:58:43.804979 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 15:58:43 crc kubenswrapper[4834]: W0121 15:58:43.812674 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a0c44c8_44bf_4166_a864_c2b993c2e042.slice/crio-dc537a5f3f5ce7458ec08e83a2cd228ec4a177d2a09fc1d4735058b18b4c2b0e WatchSource:0}: Error finding container dc537a5f3f5ce7458ec08e83a2cd228ec4a177d2a09fc1d4735058b18b4c2b0e: Status 404 returned error can't find the container with id dc537a5f3f5ce7458ec08e83a2cd228ec4a177d2a09fc1d4735058b18b4c2b0e Jan 21 15:58:43 crc kubenswrapper[4834]: I0121 15:58:43.917178 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 21 15:58:43 crc kubenswrapper[4834]: W0121 15:58:43.922686 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5f827c_3861_4d8d_a72b_7a9f0fe75ea3.slice/crio-420e46dacdbf6881f1c468bebb4e0c162494962d0394e32611d9855d06caf662 WatchSource:0}: Error finding container 420e46dacdbf6881f1c468bebb4e0c162494962d0394e32611d9855d06caf662: Status 404 returned error can't find the container with id 420e46dacdbf6881f1c468bebb4e0c162494962d0394e32611d9855d06caf662 Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.020477 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.114039 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"30586894-d62b-41e6-b0aa-32a52c74c2d4","Type":"ContainerStarted","Data":"c27bafb47e2bde2cb3e2fb94127d8fe35f7f4a032e3f9bc352e16d5d673d2505"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.115571 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"e7dd5896-6634-4d8d-b95b-88138f750ca6","Type":"ContainerStarted","Data":"3b54f286ef5c2bfb8bf4a838352e695b82092b8ef90a2c54eeaba3fd7d2628ff"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.115621 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"e7dd5896-6634-4d8d-b95b-88138f750ca6","Type":"ContainerStarted","Data":"c5e5a3b032956430f7ffb87a97a23610b292338a20b30cde0c2c013c9806a8c5"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.116526 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3","Type":"ContainerStarted","Data":"420e46dacdbf6881f1c468bebb4e0c162494962d0394e32611d9855d06caf662"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.117733 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e3cf6365-be8d-41d7-a56e-03259a20a210","Type":"ContainerStarted","Data":"4e0e361f2a953e4246717840ed5e9ff981e0e3719c20443664e99b94162a461f"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.120334 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2a0c44c8-44bf-4166-a864-c2b993c2e042","Type":"ContainerStarted","Data":"dc537a5f3f5ce7458ec08e83a2cd228ec4a177d2a09fc1d4735058b18b4c2b0e"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.122060 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"70bb6d32-c183-4cad-9cfa-6b40f80551d8","Type":"ContainerStarted","Data":"6b17a89f8ab5753f5255b581feae0cd1f54577747a24f605ff9ab2dfc1fb7ccc"} Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.773127 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.776114 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.792633 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.968423 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjcr\" (UniqueName: \"kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.968521 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:44 crc kubenswrapper[4834]: I0121 15:58:44.968543 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.070189 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.070262 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.070353 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjcr\" (UniqueName: \"kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.071693 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 
15:58:45.071740 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.089542 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjcr\" (UniqueName: \"kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr\") pod \"community-operators-whfw5\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.130543 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"e7dd5896-6634-4d8d-b95b-88138f750ca6","Type":"ContainerStarted","Data":"780363b1f2607e565a9bd504a8c5ff5c220061054a40cbc041614d92c4a686c2"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.133055 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3","Type":"ContainerStarted","Data":"6af2faa2c7b465457362474c04dbc31f1b621f139c71a452b127250f5c40a783"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.133228 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3","Type":"ContainerStarted","Data":"77e6f31ece6bbbf416bc74326e116c49324e43c0615e47ceefe69d81ebc08669"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.134882 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e3cf6365-be8d-41d7-a56e-03259a20a210","Type":"ContainerStarted","Data":"99725e715824b998d6b61d0b30e1f77b330734f3d8eee6776e6965b76ad3dd4c"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.134913 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"e3cf6365-be8d-41d7-a56e-03259a20a210","Type":"ContainerStarted","Data":"8eb1ed6b88e0c676715d4d965bec20e9195ee0cf17694ffa768e05398cac84e9"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.136749 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2a0c44c8-44bf-4166-a864-c2b993c2e042","Type":"ContainerStarted","Data":"3567bb8ad88c9a1e1daa5bfe216657878402939e3f70898e2ae5559304a3f28a"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.136864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2a0c44c8-44bf-4166-a864-c2b993c2e042","Type":"ContainerStarted","Data":"7837432fa72c670ac5bc9246ae71a26bd025f215cefdc542ea167eeb32d8ac15"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.138104 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"70bb6d32-c183-4cad-9cfa-6b40f80551d8","Type":"ContainerStarted","Data":"03191cfc54c6a27188541383618336ddbc9c84daedc90872cae77db9182b059b"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.138126 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"70bb6d32-c183-4cad-9cfa-6b40f80551d8","Type":"ContainerStarted","Data":"acc204bb5954c4e92fd15cb32f0731f16790c0ec699405f4d5006cb81b473724"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.140585 4834 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"30586894-d62b-41e6-b0aa-32a52c74c2d4","Type":"ContainerStarted","Data":"d319df34879963748f2c784c1162d5ad95a42b4ada57f2444c567cd9d53fe27f"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.140612 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"30586894-d62b-41e6-b0aa-32a52c74c2d4","Type":"ContainerStarted","Data":"d7cf2a4c72c6814103364b8fd74a44691bac305b1e7a0257e87496104521dbd5"} Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.154350 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.154331311 podStartE2EDuration="4.154331311s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.147663052 +0000 UTC m=+5271.122012117" watchObservedRunningTime="2026-01-21 15:58:45.154331311 +0000 UTC m=+5271.128680356" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.173376 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.173351244 podStartE2EDuration="4.173351244s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.165034725 +0000 UTC m=+5271.139383780" watchObservedRunningTime="2026-01-21 15:58:45.173351244 +0000 UTC m=+5271.147700299" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.187196 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.187172605 podStartE2EDuration="4.187172605s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.181425396 +0000 UTC m=+5271.155774471" watchObservedRunningTime="2026-01-21 15:58:45.187172605 +0000 UTC m=+5271.161521650" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.188309 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.205083 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.205065104 podStartE2EDuration="4.205065104s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.202151493 +0000 UTC m=+5271.176500538" watchObservedRunningTime="2026-01-21 15:58:45.205065104 +0000 UTC m=+5271.179414149" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.227021 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.226253075 podStartE2EDuration="4.226253075s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.225349338 +0000 UTC m=+5271.199698383" watchObservedRunningTime="2026-01-21 15:58:45.226253075 +0000 UTC m=+5271.200602120" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.253814 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.253789415 podStartE2EDuration="4.253789415s" podCreationTimestamp="2026-01-21 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:45.243571716 +0000 UTC m=+5271.217920771" watchObservedRunningTime="2026-01-21 15:58:45.253789415 +0000 UTC m=+5271.228138460" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.694153 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:45 crc kubenswrapper[4834]: W0121 15:58:45.700631 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode553ae15_fe83_47a9_84da_48e1f89bc5e3.slice/crio-87f06a014aac186a6087ff4372929246c4672876579810e8778694bd90beb14c WatchSource:0}: Error finding container 87f06a014aac186a6087ff4372929246c4672876579810e8778694bd90beb14c: Status 404 returned error can't find the container with id 87f06a014aac186a6087ff4372929246c4672876579810e8778694bd90beb14c Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.944557 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:45 crc kubenswrapper[4834]: I0121 15:58:45.962805 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.093305 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.150083 4834 generic.go:334] "Generic (PLEG): container finished" podID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerID="5381c5bed68f4b27b38dce06e843ee63332e6eeddb06cfae96fcea4fe1b7bf7d" exitCode=0 Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.151256 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" 
event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerDied","Data":"5381c5bed68f4b27b38dce06e843ee63332e6eeddb06cfae96fcea4fe1b7bf7d"} Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.151297 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerStarted","Data":"87f06a014aac186a6087ff4372929246c4672876579810e8778694bd90beb14c"} Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.234367 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.307839 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:46 crc kubenswrapper[4834]: I0121 15:58:46.314103 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:47 crc kubenswrapper[4834]: I0121 15:58:47.172978 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerStarted","Data":"71f04a9eedb584592b7133c4e876cd41da3fe76f6b56804210500581ee6c2b27"} Jan 21 15:58:47 crc kubenswrapper[4834]: I0121 15:58:47.325391 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:58:47 crc kubenswrapper[4834]: E0121 15:58:47.325746 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:47 crc kubenswrapper[4834]: I0121 15:58:47.944378 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:47 crc kubenswrapper[4834]: I0121 15:58:47.963706 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.093912 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.176141 4834 generic.go:334] "Generic (PLEG): container finished" podID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerID="71f04a9eedb584592b7133c4e876cd41da3fe76f6b56804210500581ee6c2b27" exitCode=0 Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.176289 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerDied","Data":"71f04a9eedb584592b7133c4e876cd41da3fe76f6b56804210500581ee6c2b27"} Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.234499 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.308304 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.314489 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" 
Jan 21 15:58:48 crc kubenswrapper[4834]: I0121 15:58:48.989912 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.004759 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.131532 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.185838 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerStarted","Data":"26c198e8af3d52f31e658d98a0d7f13fa0fd2f82ea09adc7c59609a4f3615cdd"} Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.207226 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whfw5" podStartSLOduration=2.759452837 podStartE2EDuration="5.207202332s" podCreationTimestamp="2026-01-21 15:58:44 +0000 UTC" firstStartedPulling="2026-01-21 15:58:46.153617429 +0000 UTC m=+5272.127966464" lastFinishedPulling="2026-01-21 15:58:48.601366914 +0000 UTC m=+5274.575715959" observedRunningTime="2026-01-21 15:58:49.202748204 +0000 UTC m=+5275.177097259" watchObservedRunningTime="2026-01-21 15:58:49.207202332 +0000 UTC m=+5275.181551377" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.223075 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.231498 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.231803 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.299998 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.343518 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.360565 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.373450 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.446653 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.535839 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4fdcb6ff-thwm8"] Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.537155 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.539627 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.596334 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4fdcb6ff-thwm8"] Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.658223 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppk4x\" (UniqueName: \"kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.658296 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.658698 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.658889 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.759916 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.760012 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.760061 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppk4x\" (UniqueName: \"kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.760106 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc 
kubenswrapper[4834]: I0121 15:58:49.761093 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.761121 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.761114 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.779222 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4fdcb6ff-thwm8"] Jan 21 15:58:49 crc kubenswrapper[4834]: E0121 15:58:49.780793 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-ppk4x], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" podUID="00d169b2-b7c6-417b-b27e-202401f03545" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.788469 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppk4x\" (UniqueName: \"kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x\") pod \"dnsmasq-dns-c4fdcb6ff-thwm8\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.838814 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"] Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.846013 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.852844 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.888554 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"] Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.964613 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.964671 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhq2\" (UniqueName: \"kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.964699 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.964893 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:49 crc kubenswrapper[4834]: I0121 15:58:49.965034 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.066949 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067020 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067099 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " 
pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067139 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhq2\" (UniqueName: \"kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067166 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067897 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.067984 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.068284 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.068898 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.083392 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhq2\" (UniqueName: \"kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2\") pod \"dnsmasq-dns-577666b5dc-v7sz5\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") " pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.187511 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.193514 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.205913 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.241687 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.371541 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb\") pod \"00d169b2-b7c6-417b-b27e-202401f03545\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.371721 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config\") pod \"00d169b2-b7c6-417b-b27e-202401f03545\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.371831 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc\") pod \"00d169b2-b7c6-417b-b27e-202401f03545\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.371862 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppk4x\" (UniqueName: \"kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x\") pod \"00d169b2-b7c6-417b-b27e-202401f03545\" (UID: \"00d169b2-b7c6-417b-b27e-202401f03545\") " Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.374430 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00d169b2-b7c6-417b-b27e-202401f03545" (UID: "00d169b2-b7c6-417b-b27e-202401f03545"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.374764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config" (OuterVolumeSpecName: "config") pod "00d169b2-b7c6-417b-b27e-202401f03545" (UID: "00d169b2-b7c6-417b-b27e-202401f03545"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.375111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00d169b2-b7c6-417b-b27e-202401f03545" (UID: "00d169b2-b7c6-417b-b27e-202401f03545"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:58:50 crc kubenswrapper[4834]: I0121 15:58:50.380730 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x" (OuterVolumeSpecName: "kube-api-access-ppk4x") pod "00d169b2-b7c6-417b-b27e-202401f03545" (UID: "00d169b2-b7c6-417b-b27e-202401f03545"). InnerVolumeSpecName "kube-api-access-ppk4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:50.475089 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:50.475119 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:50.475133 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppk4x\" (UniqueName: \"kubernetes.io/projected/00d169b2-b7c6-417b-b27e-202401f03545-kube-api-access-ppk4x\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:50.475150 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d169b2-b7c6-417b-b27e-202401f03545-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:50.707701 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"] Jan 21 15:58:51 crc kubenswrapper[4834]: W0121 15:58:50.708431 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4bbb26_78db_4a78_bb34_322259b6d35e.slice/crio-c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495 WatchSource:0}: Error finding container c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495: Status 404 returned error can't find the container with id c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495 Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.202055 4834 generic.go:334] "Generic (PLEG): container finished" podID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerID="e59a0ac4af384677ce5ae626ec634756b3e6ee6245ef7f15f3b39d3b00becf3e" exitCode=0 Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.203554 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" event={"ID":"8f4bbb26-78db-4a78-bb34-322259b6d35e","Type":"ContainerDied","Data":"e59a0ac4af384677ce5ae626ec634756b3e6ee6245ef7f15f3b39d3b00becf3e"} Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.203579 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" event={"ID":"8f4bbb26-78db-4a78-bb34-322259b6d35e","Type":"ContainerStarted","Data":"c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495"} Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.203616 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4fdcb6ff-thwm8" Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.283727 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4fdcb6ff-thwm8"] Jan 21 15:58:51 crc kubenswrapper[4834]: I0121 15:58:51.292975 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c4fdcb6ff-thwm8"] Jan 21 15:58:52 crc kubenswrapper[4834]: I0121 15:58:52.212793 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" event={"ID":"8f4bbb26-78db-4a78-bb34-322259b6d35e","Type":"ContainerStarted","Data":"09ceb7aa460369583e82475fe67466439c347b7b044b4272cf9354a778e1f523"} Jan 21 15:58:52 crc kubenswrapper[4834]: I0121 15:58:52.213120 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:58:52 crc kubenswrapper[4834]: I0121 15:58:52.333603 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d169b2-b7c6-417b-b27e-202401f03545" path="/var/lib/kubelet/pods/00d169b2-b7c6-417b-b27e-202401f03545/volumes" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.071606 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" podStartSLOduration=4.071586122 podStartE2EDuration="4.071586122s" podCreationTimestamp="2026-01-21 15:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:52.22982968 +0000 UTC m=+5278.204178725" watchObservedRunningTime="2026-01-21 15:58:53.071586122 +0000 UTC m=+5279.045935167" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.078598 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.079767 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.081875 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.086877 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.214775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdcq\" (UniqueName: \"kubernetes.io/projected/da3ead55-f584-4ef3-aa5f-799e103b68db-kube-api-access-stdcq\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.215051 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a43324f-0ec5-4565-812d-20654458a3e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a43324f-0ec5-4565-812d-20654458a3e2\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.215126 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/da3ead55-f584-4ef3-aa5f-799e103b68db-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.316988 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a43324f-0ec5-4565-812d-20654458a3e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a43324f-0ec5-4565-812d-20654458a3e2\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.317088 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/da3ead55-f584-4ef3-aa5f-799e103b68db-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.317144 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdcq\" (UniqueName: \"kubernetes.io/projected/da3ead55-f584-4ef3-aa5f-799e103b68db-kube-api-access-stdcq\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.323554 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.323599 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a43324f-0ec5-4565-812d-20654458a3e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a43324f-0ec5-4565-812d-20654458a3e2\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eabbfecd5f58b29c8784a4c9db36defd77d3665b330fb6b388246ba04d7d9773/globalmount\"" pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.333544 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/da3ead55-f584-4ef3-aa5f-799e103b68db-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.346254 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdcq\" (UniqueName: \"kubernetes.io/projected/da3ead55-f584-4ef3-aa5f-799e103b68db-kube-api-access-stdcq\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.364850 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a43324f-0ec5-4565-812d-20654458a3e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a43324f-0ec5-4565-812d-20654458a3e2\") pod \"ovn-copy-data\" (UID: \"da3ead55-f584-4ef3-aa5f-799e103b68db\") " pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.396121 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 21 15:58:53 crc kubenswrapper[4834]: I0121 15:58:53.983223 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 21 15:58:53 crc kubenswrapper[4834]: W0121 15:58:53.984677 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda3ead55_f584_4ef3_aa5f_799e103b68db.slice/crio-176ed8d5c6116ae79519b7c7052d8a2f4e8fe5f7256a96256c3b3267c2fc21ea WatchSource:0}: Error finding container 176ed8d5c6116ae79519b7c7052d8a2f4e8fe5f7256a96256c3b3267c2fc21ea: Status 404 returned error can't find the container with id 176ed8d5c6116ae79519b7c7052d8a2f4e8fe5f7256a96256c3b3267c2fc21ea Jan 21 15:58:54 crc kubenswrapper[4834]: I0121 15:58:54.230639 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"da3ead55-f584-4ef3-aa5f-799e103b68db","Type":"ContainerStarted","Data":"176ed8d5c6116ae79519b7c7052d8a2f4e8fe5f7256a96256c3b3267c2fc21ea"} Jan 21 15:58:55 crc kubenswrapper[4834]: I0121 15:58:55.188979 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:55 crc kubenswrapper[4834]: I0121 15:58:55.189382 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:55 crc kubenswrapper[4834]: I0121 15:58:55.231763 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:55 crc kubenswrapper[4834]: I0121 15:58:55.287800 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:55 crc kubenswrapper[4834]: I0121 15:58:55.467391 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:56 crc kubenswrapper[4834]: I0121 15:58:56.246114 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"da3ead55-f584-4ef3-aa5f-799e103b68db","Type":"ContainerStarted","Data":"b47cc69cf8679fe0fa208b14544759cf9a4c67bcde13fca14a1b5387f2faf528"} Jan 21 15:58:56 crc kubenswrapper[4834]: I0121 15:58:56.271031 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.828748313 podStartE2EDuration="4.271005247s" podCreationTimestamp="2026-01-21 15:58:52 +0000 UTC" firstStartedPulling="2026-01-21 15:58:53.986972671 +0000 UTC m=+5279.961321716" lastFinishedPulling="2026-01-21 15:58:55.429229605 +0000 UTC m=+5281.403578650" observedRunningTime="2026-01-21 15:58:56.25927423 +0000 UTC m=+5282.233623275" watchObservedRunningTime="2026-01-21 15:58:56.271005247 +0000 UTC m=+5282.245354292" Jan 21 15:58:57 crc kubenswrapper[4834]: I0121 15:58:57.252874 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whfw5" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="registry-server" containerID="cri-o://26c198e8af3d52f31e658d98a0d7f13fa0fd2f82ea09adc7c59609a4f3615cdd" gracePeriod=2 Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.281394 4834 generic.go:334] "Generic (PLEG): container finished" podID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerID="26c198e8af3d52f31e658d98a0d7f13fa0fd2f82ea09adc7c59609a4f3615cdd" exitCode=0 Jan 21 15:58:58 crc 
kubenswrapper[4834]: I0121 15:58:58.281482 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerDied","Data":"26c198e8af3d52f31e658d98a0d7f13fa0fd2f82ea09adc7c59609a4f3615cdd"} Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.434154 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.608460 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content\") pod \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.608874 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjcr\" (UniqueName: \"kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr\") pod \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.608997 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities\") pod \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\" (UID: \"e553ae15-fe83-47a9-84da-48e1f89bc5e3\") " Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.610103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities" (OuterVolumeSpecName: "utilities") pod "e553ae15-fe83-47a9-84da-48e1f89bc5e3" (UID: "e553ae15-fe83-47a9-84da-48e1f89bc5e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.620293 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr" (OuterVolumeSpecName: "kube-api-access-vwjcr") pod "e553ae15-fe83-47a9-84da-48e1f89bc5e3" (UID: "e553ae15-fe83-47a9-84da-48e1f89bc5e3"). InnerVolumeSpecName "kube-api-access-vwjcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.664559 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e553ae15-fe83-47a9-84da-48e1f89bc5e3" (UID: "e553ae15-fe83-47a9-84da-48e1f89bc5e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.711069 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwjcr\" (UniqueName: \"kubernetes.io/projected/e553ae15-fe83-47a9-84da-48e1f89bc5e3-kube-api-access-vwjcr\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.711198 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:58 crc kubenswrapper[4834]: I0121 15:58:58.711220 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e553ae15-fe83-47a9-84da-48e1f89bc5e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.291128 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whfw5" event={"ID":"e553ae15-fe83-47a9-84da-48e1f89bc5e3","Type":"ContainerDied","Data":"87f06a014aac186a6087ff4372929246c4672876579810e8778694bd90beb14c"} Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.291186 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whfw5" Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.291203 4834 scope.go:117] "RemoveContainer" containerID="26c198e8af3d52f31e658d98a0d7f13fa0fd2f82ea09adc7c59609a4f3615cdd" Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.314226 4834 scope.go:117] "RemoveContainer" containerID="71f04a9eedb584592b7133c4e876cd41da3fe76f6b56804210500581ee6c2b27" Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.327715 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:58:59 crc kubenswrapper[4834]: E0121 15:58:59.328095 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.332979 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.343325 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whfw5"] Jan 21 15:58:59 crc kubenswrapper[4834]: I0121 15:58:59.360207 4834 scope.go:117] "RemoveContainer" containerID="5381c5bed68f4b27b38dce06e843ee63332e6eeddb06cfae96fcea4fe1b7bf7d" Jan 21 15:59:00 crc kubenswrapper[4834]: I0121 15:59:00.189892 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" Jan 21 15:59:00 crc kubenswrapper[4834]: I0121 15:59:00.253733 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"] Jan 21 15:59:00 crc kubenswrapper[4834]: I0121 15:59:00.254118 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-8p798" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="dnsmasq-dns" 
containerID="cri-o://e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1" gracePeriod=10 Jan 21 15:59:00 crc kubenswrapper[4834]: I0121 15:59:00.339008 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" path="/var/lib/kubelet/pods/e553ae15-fe83-47a9-84da-48e1f89bc5e3/volumes" Jan 21 15:59:00 crc kubenswrapper[4834]: I0121 15:59:00.820881 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-699964fbc-8p798" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.248:5353: connect: connection refused" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.211370 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8p798" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.317713 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerID="e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1" exitCode=0 Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.317771 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8p798" event={"ID":"2ae99f7a-abb1-41a2-9aca-179841af7226","Type":"ContainerDied","Data":"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1"} Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.317815 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8p798" event={"ID":"2ae99f7a-abb1-41a2-9aca-179841af7226","Type":"ContainerDied","Data":"412ff08abde6667919b126d64729a7131d0723ae17c2a674781218269e0993f6"} Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.317839 4834 scope.go:117] "RemoveContainer" containerID="e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.317839 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8p798" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.342108 4834 scope.go:117] "RemoveContainer" containerID="78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.353264 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4h5g\" (UniqueName: \"kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g\") pod \"2ae99f7a-abb1-41a2-9aca-179841af7226\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.353368 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc\") pod \"2ae99f7a-abb1-41a2-9aca-179841af7226\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.353401 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config\") pod \"2ae99f7a-abb1-41a2-9aca-179841af7226\" (UID: \"2ae99f7a-abb1-41a2-9aca-179841af7226\") " Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.359232 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g" (OuterVolumeSpecName: "kube-api-access-q4h5g") pod "2ae99f7a-abb1-41a2-9aca-179841af7226" (UID: "2ae99f7a-abb1-41a2-9aca-179841af7226"). InnerVolumeSpecName "kube-api-access-q4h5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.379970 4834 scope.go:117] "RemoveContainer" containerID="e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.382691 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1\": container with ID starting with e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1 not found: ID does not exist" containerID="e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.382748 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1"} err="failed to get container status \"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1\": rpc error: code = NotFound desc = could not find container \"e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1\": container with ID starting with e68d2217f76cd495ad1fc07e7807a776274ad6a2ecf9087763103a009fd7fef1 not found: ID does not exist" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.382786 4834 scope.go:117] "RemoveContainer" containerID="78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.383159 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247\": container with ID starting with 78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247 not found: ID does not 
exist" containerID="78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.383203 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247"} err="failed to get container status \"78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247\": rpc error: code = NotFound desc = could not find container \"78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247\": container with ID starting with 78768f174cefc0151b5327d17d62e025630be68d4612c96fd9e8225e926e6247 not found: ID does not exist" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.401100 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ae99f7a-abb1-41a2-9aca-179841af7226" (UID: "2ae99f7a-abb1-41a2-9aca-179841af7226"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.403720 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config" (OuterVolumeSpecName: "config") pod "2ae99f7a-abb1-41a2-9aca-179841af7226" (UID: "2ae99f7a-abb1-41a2-9aca-179841af7226"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.460006 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4h5g\" (UniqueName: \"kubernetes.io/projected/2ae99f7a-abb1-41a2-9aca-179841af7226-kube-api-access-q4h5g\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.460042 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.460053 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ae99f7a-abb1-41a2-9aca-179841af7226-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499231 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.499600 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="extract-content" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499620 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="extract-content" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.499633 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="init" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499639 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="init" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.499650 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="registry-server" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499657 4834 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="registry-server" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.499668 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="extract-utilities" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499674 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="extract-utilities" Jan 21 15:59:01 crc kubenswrapper[4834]: E0121 15:59:01.499695 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="dnsmasq-dns" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499700 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="dnsmasq-dns" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499838 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e553ae15-fe83-47a9-84da-48e1f89bc5e3" containerName="registry-server" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.499850 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" containerName="dnsmasq-dns" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.500729 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.502637 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.503690 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.503751 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-frfd6" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.514645 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.650370 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"] Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.658401 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8p798"] Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.663237 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.663346 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.663378 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5jb\" (UniqueName: \"kubernetes.io/projected/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-kube-api-access-zg5jb\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " 
pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.663459 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-scripts\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.663501 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-config\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765102 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765168 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5jb\" (UniqueName: \"kubernetes.io/projected/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-kube-api-access-zg5jb\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765259 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-scripts\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765301 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-config\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765368 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.765805 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.766251 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-config\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.766251 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-scripts\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc 
kubenswrapper[4834]: I0121 15:59:01.768689 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.782503 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5jb\" (UniqueName: \"kubernetes.io/projected/5775a7ad-6dc5-4cfb-8f00-302c15dedfac-kube-api-access-zg5jb\") pod \"ovn-northd-0\" (UID: \"5775a7ad-6dc5-4cfb-8f00-302c15dedfac\") " pod="openstack/ovn-northd-0" Jan 21 15:59:01 crc kubenswrapper[4834]: I0121 15:59:01.819622 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 15:59:02 crc kubenswrapper[4834]: I0121 15:59:02.296330 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 15:59:02 crc kubenswrapper[4834]: W0121 15:59:02.300056 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5775a7ad_6dc5_4cfb_8f00_302c15dedfac.slice/crio-518eb76f5e8be4ca4ab554771561a363c17643dfb2ba9253ed23d09d3ecba18d WatchSource:0}: Error finding container 518eb76f5e8be4ca4ab554771561a363c17643dfb2ba9253ed23d09d3ecba18d: Status 404 returned error can't find the container with id 518eb76f5e8be4ca4ab554771561a363c17643dfb2ba9253ed23d09d3ecba18d Jan 21 15:59:02 crc kubenswrapper[4834]: I0121 15:59:02.341154 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae99f7a-abb1-41a2-9aca-179841af7226" path="/var/lib/kubelet/pods/2ae99f7a-abb1-41a2-9aca-179841af7226/volumes" Jan 21 15:59:02 crc kubenswrapper[4834]: I0121 15:59:02.341870 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5775a7ad-6dc5-4cfb-8f00-302c15dedfac","Type":"ContainerStarted","Data":"518eb76f5e8be4ca4ab554771561a363c17643dfb2ba9253ed23d09d3ecba18d"} Jan 21 15:59:03 crc kubenswrapper[4834]: I0121 15:59:03.335881 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5775a7ad-6dc5-4cfb-8f00-302c15dedfac","Type":"ContainerStarted","Data":"209e592e988184e6f0f8b395bda41d7496231503fa378356898ada10551262aa"} Jan 21 15:59:03 crc kubenswrapper[4834]: I0121 15:59:03.336429 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5775a7ad-6dc5-4cfb-8f00-302c15dedfac","Type":"ContainerStarted","Data":"8bdaa21b0b61f473f8bad8f03e1fcab6bb7a2666bb8248f6ed84de16e17e01d9"} Jan 21 15:59:03 crc kubenswrapper[4834]: I0121 15:59:03.336493 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 15:59:03 crc kubenswrapper[4834]: I0121 15:59:03.369860 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.369834605 podStartE2EDuration="2.369834605s" podCreationTimestamp="2026-01-21 15:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:03.358657776 +0000 UTC m=+5289.333006861" watchObservedRunningTime="2026-01-21 15:59:03.369834605 +0000 UTC m=+5289.344183650" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.768213 4834 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-create-lnfv6"] Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.769886 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.779226 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lnfv6"] Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.850177 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q268b\" (UniqueName: \"kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.850858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.878792 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ff41-account-create-update-zr2nl"] Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.880099 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.883470 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.888719 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ff41-account-create-update-zr2nl"] Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.952721 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.952810 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q268b\" (UniqueName: \"kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.954038 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:06 crc kubenswrapper[4834]: I0121 15:59:06.976649 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q268b\" (UniqueName: \"kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b\") pod \"keystone-db-create-lnfv6\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.054795 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.055249 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdvd\" (UniqueName: \"kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.145145 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.157041 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdvd\" (UniqueName: \"kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.157136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.157846 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.174774 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdvd\" (UniqueName: \"kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd\") pod \"keystone-ff41-account-create-update-zr2nl\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.204109 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.600888 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lnfv6"] Jan 21 15:59:07 crc kubenswrapper[4834]: W0121 15:59:07.609040 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41e40b21_a32b_4aef_bcb1_5b5187d68abc.slice/crio-327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338 WatchSource:0}: Error finding container 327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338: Status 404 returned error can't find the container with id 327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338 Jan 21 15:59:07 crc kubenswrapper[4834]: I0121 15:59:07.683602 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ff41-account-create-update-zr2nl"] Jan 21 15:59:07 crc kubenswrapper[4834]: W0121 15:59:07.687400 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1692378f_f497_4913_b9e9_59e40af2100b.slice/crio-141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42 WatchSource:0}: Error finding container 141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42: Status 404 returned error can't find the container with id 141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42 Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.374154 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lnfv6" event={"ID":"41e40b21-a32b-4aef-bcb1-5b5187d68abc","Type":"ContainerStarted","Data":"9be0980a5078844f42c26f612b30795e1eba97e238949612ba5946b8805399c1"} Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.374221 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lnfv6" event={"ID":"41e40b21-a32b-4aef-bcb1-5b5187d68abc","Type":"ContainerStarted","Data":"327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338"} Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.376742 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff41-account-create-update-zr2nl" event={"ID":"1692378f-f497-4913-b9e9-59e40af2100b","Type":"ContainerStarted","Data":"2e572f993d20c359be8a8ae202a2aa3a052918ace0b71cc1efb00dc08d908beb"} Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.376781 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff41-account-create-update-zr2nl" event={"ID":"1692378f-f497-4913-b9e9-59e40af2100b","Type":"ContainerStarted","Data":"141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42"} Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.390376 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-lnfv6" podStartSLOduration=2.390360127 podStartE2EDuration="2.390360127s" podCreationTimestamp="2026-01-21 15:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:08.388128628 +0000 UTC m=+5294.362477673" watchObservedRunningTime="2026-01-21 15:59:08.390360127 +0000 UTC m=+5294.364709172" Jan 21 15:59:08 crc kubenswrapper[4834]: I0121 15:59:08.404364 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ff41-account-create-update-zr2nl" 
podStartSLOduration=2.404346533 podStartE2EDuration="2.404346533s" podCreationTimestamp="2026-01-21 15:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:08.404311972 +0000 UTC m=+5294.378661057" watchObservedRunningTime="2026-01-21 15:59:08.404346533 +0000 UTC m=+5294.378695568" Jan 21 15:59:09 crc kubenswrapper[4834]: I0121 15:59:09.386372 4834 generic.go:334] "Generic (PLEG): container finished" podID="1692378f-f497-4913-b9e9-59e40af2100b" containerID="2e572f993d20c359be8a8ae202a2aa3a052918ace0b71cc1efb00dc08d908beb" exitCode=0 Jan 21 15:59:09 crc kubenswrapper[4834]: I0121 15:59:09.386442 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff41-account-create-update-zr2nl" event={"ID":"1692378f-f497-4913-b9e9-59e40af2100b","Type":"ContainerDied","Data":"2e572f993d20c359be8a8ae202a2aa3a052918ace0b71cc1efb00dc08d908beb"} Jan 21 15:59:09 crc kubenswrapper[4834]: I0121 15:59:09.389297 4834 generic.go:334] "Generic (PLEG): container finished" podID="41e40b21-a32b-4aef-bcb1-5b5187d68abc" containerID="9be0980a5078844f42c26f612b30795e1eba97e238949612ba5946b8805399c1" exitCode=0 Jan 21 15:59:09 crc kubenswrapper[4834]: I0121 15:59:09.389347 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lnfv6" event={"ID":"41e40b21-a32b-4aef-bcb1-5b5187d68abc","Type":"ContainerDied","Data":"9be0980a5078844f42c26f612b30795e1eba97e238949612ba5946b8805399c1"} Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.838111 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.844409 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.931379 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts\") pod \"1692378f-f497-4913-b9e9-59e40af2100b\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.931489 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q268b\" (UniqueName: \"kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b\") pod \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.931550 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts\") pod \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\" (UID: \"41e40b21-a32b-4aef-bcb1-5b5187d68abc\") " Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.931577 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdvd\" (UniqueName: \"kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd\") pod \"1692378f-f497-4913-b9e9-59e40af2100b\" (UID: \"1692378f-f497-4913-b9e9-59e40af2100b\") " Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.932043 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1692378f-f497-4913-b9e9-59e40af2100b" (UID: "1692378f-f497-4913-b9e9-59e40af2100b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.932460 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41e40b21-a32b-4aef-bcb1-5b5187d68abc" (UID: "41e40b21-a32b-4aef-bcb1-5b5187d68abc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.938202 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd" (OuterVolumeSpecName: "kube-api-access-9xdvd") pod "1692378f-f497-4913-b9e9-59e40af2100b" (UID: "1692378f-f497-4913-b9e9-59e40af2100b"). InnerVolumeSpecName "kube-api-access-9xdvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:10 crc kubenswrapper[4834]: I0121 15:59:10.938264 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b" (OuterVolumeSpecName: "kube-api-access-q268b") pod "41e40b21-a32b-4aef-bcb1-5b5187d68abc" (UID: "41e40b21-a32b-4aef-bcb1-5b5187d68abc"). InnerVolumeSpecName "kube-api-access-q268b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.034268 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1692378f-f497-4913-b9e9-59e40af2100b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.034313 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q268b\" (UniqueName: \"kubernetes.io/projected/41e40b21-a32b-4aef-bcb1-5b5187d68abc-kube-api-access-q268b\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.034327 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e40b21-a32b-4aef-bcb1-5b5187d68abc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.034338 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdvd\" (UniqueName: \"kubernetes.io/projected/1692378f-f497-4913-b9e9-59e40af2100b-kube-api-access-9xdvd\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.406270 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ff41-account-create-update-zr2nl" event={"ID":"1692378f-f497-4913-b9e9-59e40af2100b","Type":"ContainerDied","Data":"141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42"} Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.406349 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="141a1b8e596c83b815b2fd7da90f35f74b2c39fad53ebf8a0a073101e9957a42" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.406458 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-ff41-account-create-update-zr2nl" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.414342 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lnfv6" event={"ID":"41e40b21-a32b-4aef-bcb1-5b5187d68abc","Type":"ContainerDied","Data":"327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338"} Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.414414 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="327892edb3f260d6af22f19eab963eefab21540b1c4c617ea1696a958c167338" Jan 21 15:59:11 crc kubenswrapper[4834]: I0121 15:59:11.414497 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lnfv6" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.398300 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-mw4wp"] Jan 21 15:59:12 crc kubenswrapper[4834]: E0121 15:59:12.399767 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1692378f-f497-4913-b9e9-59e40af2100b" containerName="mariadb-account-create-update" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.399880 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1692378f-f497-4913-b9e9-59e40af2100b" containerName="mariadb-account-create-update" Jan 21 15:59:12 crc kubenswrapper[4834]: E0121 15:59:12.400027 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e40b21-a32b-4aef-bcb1-5b5187d68abc" containerName="mariadb-database-create" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.400114 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e40b21-a32b-4aef-bcb1-5b5187d68abc" containerName="mariadb-database-create" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.400380 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1692378f-f497-4913-b9e9-59e40af2100b" containerName="mariadb-account-create-update" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.400472 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e40b21-a32b-4aef-bcb1-5b5187d68abc" containerName="mariadb-database-create" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.401310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.407166 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.407299 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.407718 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.407906 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjpmd" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.408802 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mw4wp"] Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.561910 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn25\" (UniqueName: \"kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.562131 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.562227 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.663825 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn25\" (UniqueName: \"kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.663963 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.664863 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.669769 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.669874 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.692305 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmn25\" (UniqueName: \"kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25\") pod \"keystone-db-sync-mw4wp\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:12 crc kubenswrapper[4834]: I0121 15:59:12.720391 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:13 crc kubenswrapper[4834]: I0121 15:59:13.422992 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 15:59:13 crc kubenswrapper[4834]: E0121 15:59:13.431362 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 15:59:13 crc kubenswrapper[4834]: I0121 15:59:13.437637 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-mw4wp"] Jan 21 15:59:14 crc kubenswrapper[4834]: I0121 15:59:14.448747 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mw4wp" event={"ID":"98b5efbb-2116-40d4-8c4f-59a93f198024","Type":"ContainerStarted","Data":"f2ca19dde3c8bbb3dde0d9524f833fa10a93114b991abbcb19fd1146bb8644b3"} Jan 21 15:59:14 crc kubenswrapper[4834]: I0121 15:59:14.449504 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mw4wp" event={"ID":"98b5efbb-2116-40d4-8c4f-59a93f198024","Type":"ContainerStarted","Data":"ca9290abc6921aed2821415622b44de4b55adbeec47d761893226604d20c59b0"} Jan 21 15:59:14 crc kubenswrapper[4834]: I0121 15:59:14.479377 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-mw4wp" podStartSLOduration=2.479347666 podStartE2EDuration="2.479347666s" podCreationTimestamp="2026-01-21 15:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:14.468267741 +0000 UTC m=+5300.442616806" watchObservedRunningTime="2026-01-21 15:59:14.479347666 +0000 UTC m=+5300.453696721" Jan 21 15:59:16 crc kubenswrapper[4834]: I0121 15:59:16.465365 4834 generic.go:334] "Generic (PLEG): container finished" podID="98b5efbb-2116-40d4-8c4f-59a93f198024" containerID="f2ca19dde3c8bbb3dde0d9524f833fa10a93114b991abbcb19fd1146bb8644b3" exitCode=0 Jan 21 15:59:16 crc kubenswrapper[4834]: I0121 15:59:16.465392 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mw4wp" event={"ID":"98b5efbb-2116-40d4-8c4f-59a93f198024","Type":"ContainerDied","Data":"f2ca19dde3c8bbb3dde0d9524f833fa10a93114b991abbcb19fd1146bb8644b3"} Jan 21 15:59:16 crc kubenswrapper[4834]: I0121 15:59:16.877032 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.781759 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.904760 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data\") pod \"98b5efbb-2116-40d4-8c4f-59a93f198024\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.904949 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn25\" (UniqueName: \"kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25\") pod \"98b5efbb-2116-40d4-8c4f-59a93f198024\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.904996 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle\") pod \"98b5efbb-2116-40d4-8c4f-59a93f198024\" (UID: \"98b5efbb-2116-40d4-8c4f-59a93f198024\") " Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.909855 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25" (OuterVolumeSpecName: "kube-api-access-vmn25") pod "98b5efbb-2116-40d4-8c4f-59a93f198024" (UID: "98b5efbb-2116-40d4-8c4f-59a93f198024"). InnerVolumeSpecName "kube-api-access-vmn25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.925901 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b5efbb-2116-40d4-8c4f-59a93f198024" (UID: "98b5efbb-2116-40d4-8c4f-59a93f198024"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:17 crc kubenswrapper[4834]: I0121 15:59:17.944182 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data" (OuterVolumeSpecName: "config-data") pod "98b5efbb-2116-40d4-8c4f-59a93f198024" (UID: "98b5efbb-2116-40d4-8c4f-59a93f198024"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.006721 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn25\" (UniqueName: \"kubernetes.io/projected/98b5efbb-2116-40d4-8c4f-59a93f198024-kube-api-access-vmn25\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.006760 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.006777 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b5efbb-2116-40d4-8c4f-59a93f198024-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.483754 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-mw4wp" event={"ID":"98b5efbb-2116-40d4-8c4f-59a93f198024","Type":"ContainerDied","Data":"ca9290abc6921aed2821415622b44de4b55adbeec47d761893226604d20c59b0"} Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.483792 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9290abc6921aed2821415622b44de4b55adbeec47d761893226604d20c59b0" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.483797 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-mw4wp" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.729162 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 15:59:18 crc kubenswrapper[4834]: E0121 15:59:18.729557 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b5efbb-2116-40d4-8c4f-59a93f198024" containerName="keystone-db-sync" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.729584 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b5efbb-2116-40d4-8c4f-59a93f198024" containerName="keystone-db-sync" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.729793 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b5efbb-2116-40d4-8c4f-59a93f198024" containerName="keystone-db-sync" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.743207 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.743352 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.785569 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tp9sg"] Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.786560 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.793415 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.794488 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjpmd" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.794756 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.794901 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.795083 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.796747 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tp9sg"] Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922398 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922455 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grmsw\" (UniqueName: \"kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922536 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922559 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922659 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922686 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922727 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.922914 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:18 crc kubenswrapper[4834]: I0121 15:59:18.923016 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cbwv\" (UniqueName: \"kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024685 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024739 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024761 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024800 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024830 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cbwv\" (UniqueName: \"kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024869 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024891 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024937 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grmsw\" (UniqueName: \"kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024971 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.024991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.026040 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.026157 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.027180 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb\") pod 
\"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.030149 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.031738 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.034275 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.043629 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.044398 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.044536 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.047839 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grmsw\" (UniqueName: \"kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw\") pod \"dnsmasq-dns-59f4bfbbb7-vm62h\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.048558 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cbwv\" (UniqueName: \"kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv\") pod \"keystone-bootstrap-tp9sg\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.065100 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.107362 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.545116 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 15:59:19 crc kubenswrapper[4834]: I0121 15:59:19.673365 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tp9sg"] Jan 21 15:59:19 crc kubenswrapper[4834]: W0121 15:59:19.679555 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46de4732_9733_4bf9_953a_9d62441b191d.slice/crio-2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff WatchSource:0}: Error finding container 2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff: Status 404 returned error can't find the container with id 2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.499109 4834 generic.go:334] "Generic (PLEG): container finished" podID="40a545af-34aa-4200-8e15-ee9b364da472" containerID="8d0029d6421db15fa0ff87b37c3f699d3fa7b476753f02ff1eb481873602f7e6" exitCode=0 Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.499180 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" event={"ID":"40a545af-34aa-4200-8e15-ee9b364da472","Type":"ContainerDied","Data":"8d0029d6421db15fa0ff87b37c3f699d3fa7b476753f02ff1eb481873602f7e6"} Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.499627 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" event={"ID":"40a545af-34aa-4200-8e15-ee9b364da472","Type":"ContainerStarted","Data":"1d4b7c7e1b260ec2721e9e01c86ea9386a805853df255bd0e3029669dd9056f9"} Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.500596 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp9sg" event={"ID":"46de4732-9733-4bf9-953a-9d62441b191d","Type":"ContainerStarted","Data":"ef5af3d246b352ee98e3abdf35236257301233915ea608d56070fa0ba6d643cc"} Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.500614 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp9sg" event={"ID":"46de4732-9733-4bf9-953a-9d62441b191d","Type":"ContainerStarted","Data":"2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff"} Jan 21 15:59:20 crc kubenswrapper[4834]: I0121 15:59:20.547368 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tp9sg" podStartSLOduration=2.547346941 podStartE2EDuration="2.547346941s" podCreationTimestamp="2026-01-21 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:20.543815451 +0000 UTC m=+5306.518164506" watchObservedRunningTime="2026-01-21 15:59:20.547346941 +0000 UTC m=+5306.521695986" Jan 21 15:59:21 crc kubenswrapper[4834]: I0121 15:59:21.512717 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" event={"ID":"40a545af-34aa-4200-8e15-ee9b364da472","Type":"ContainerStarted","Data":"d90b20c899f69c46100fee828fdeaff3a4b6dd27a552768be89f9481c109f026"} Jan 21 15:59:21 crc kubenswrapper[4834]: I0121 15:59:21.545171 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" podStartSLOduration=3.545145212 
podStartE2EDuration="3.545145212s" podCreationTimestamp="2026-01-21 15:59:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:21.533220431 +0000 UTC m=+5307.507569486" watchObservedRunningTime="2026-01-21 15:59:21.545145212 +0000 UTC m=+5307.519494297" Jan 21 15:59:22 crc kubenswrapper[4834]: I0121 15:59:22.519525 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 15:59:23 crc kubenswrapper[4834]: I0121 15:59:23.528527 4834 generic.go:334] "Generic (PLEG): container finished" podID="46de4732-9733-4bf9-953a-9d62441b191d" containerID="ef5af3d246b352ee98e3abdf35236257301233915ea608d56070fa0ba6d643cc" exitCode=0 Jan 21 15:59:23 crc kubenswrapper[4834]: I0121 15:59:23.528637 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp9sg" event={"ID":"46de4732-9733-4bf9-953a-9d62441b191d","Type":"ContainerDied","Data":"ef5af3d246b352ee98e3abdf35236257301233915ea608d56070fa0ba6d643cc"} Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.853747 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tp9sg" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938181 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938317 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cbwv\" (UniqueName: \"kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938432 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938479 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938600 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.938647 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys\") pod \"46de4732-9733-4bf9-953a-9d62441b191d\" (UID: \"46de4732-9733-4bf9-953a-9d62441b191d\") " Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.944210 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.944259 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv" (OuterVolumeSpecName: "kube-api-access-4cbwv") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "kube-api-access-4cbwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.946861 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts" (OuterVolumeSpecName: "scripts") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.952181 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.962508 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data" (OuterVolumeSpecName: "config-data") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:24 crc kubenswrapper[4834]: I0121 15:59:24.969151 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46de4732-9733-4bf9-953a-9d62441b191d" (UID: "46de4732-9733-4bf9-953a-9d62441b191d"). InnerVolumeSpecName "combined-ca-bundle". 
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040869 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040909 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040923 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040951 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040963 4834 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46de4732-9733-4bf9-953a-9d62441b191d-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.040975 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cbwv\" (UniqueName: \"kubernetes.io/projected/46de4732-9733-4bf9-953a-9d62441b191d-kube-api-access-4cbwv\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.542637 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tp9sg" event={"ID":"46de4732-9733-4bf9-953a-9d62441b191d","Type":"ContainerDied","Data":"2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff"}
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.543033 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c18293dca2610e4bbe12a800bf78bba990b7c3f0b62791e65d8e6863b083eff"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.542676 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tp9sg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.625662 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tp9sg"]
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.631379 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tp9sg"]
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.734534 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2hgcg"]
Jan 21 15:59:25 crc kubenswrapper[4834]: E0121 15:59:25.734990 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46de4732-9733-4bf9-953a-9d62441b191d" containerName="keystone-bootstrap"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.735008 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="46de4732-9733-4bf9-953a-9d62441b191d" containerName="keystone-bootstrap"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.735151 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="46de4732-9733-4bf9-953a-9d62441b191d" containerName="keystone-bootstrap"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.735726 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.738037 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.738440 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.738455 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.739499 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.742124 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjpmd"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.742136 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2hgcg"]
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859356 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859450 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859486 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859552 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859685 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzgtd\" (UniqueName: \"kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.859755 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961281 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzgtd\" (UniqueName: \"kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961420 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961467 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961496 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961514 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.961547 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.967191 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.967555 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.968394 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.968634 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.970967 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:25 crc kubenswrapper[4834]: I0121 15:59:25.980060 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzgtd\" (UniqueName: \"kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd\") pod \"keystone-bootstrap-2hgcg\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") " pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.067541 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.325144 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c"
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.337284 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46de4732-9733-4bf9-953a-9d62441b191d" path="/var/lib/kubelet/pods/46de4732-9733-4bf9-953a-9d62441b191d/volumes"
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.517718 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2hgcg"]
Jan 21 15:59:26 crc kubenswrapper[4834]: W0121 15:59:26.521160 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bf7832_b212_49cc_a6f9_3de2a895a837.slice/crio-dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6 WatchSource:0}: Error finding container dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6: Status 404 returned error can't find the container with id dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.551904 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6"}
Jan 21 15:59:26 crc kubenswrapper[4834]: I0121 15:59:26.552918 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2hgcg" event={"ID":"b2bf7832-b212-49cc-a6f9-3de2a895a837","Type":"ContainerStarted","Data":"dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6"}
Jan 21 15:59:27 crc kubenswrapper[4834]: I0121 15:59:27.563893 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2hgcg" event={"ID":"b2bf7832-b212-49cc-a6f9-3de2a895a837","Type":"ContainerStarted","Data":"59ad7f30330276a6977ec5c3ae1386eb763e2ad6b682bf65ed5a31be3fee391c"}
Jan 21 15:59:27 crc kubenswrapper[4834]: I0121 15:59:27.581969 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2hgcg" podStartSLOduration=2.581951149 podStartE2EDuration="2.581951149s" podCreationTimestamp="2026-01-21 15:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:27.581723202 +0000 UTC m=+5313.556072267" watchObservedRunningTime="2026-01-21 15:59:27.581951149 +0000 UTC m=+5313.556300194"
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.066812 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h"
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.123913 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"]
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.124194 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="dnsmasq-dns" containerID="cri-o://09ceb7aa460369583e82475fe67466439c347b7b044b4272cf9354a778e1f523" gracePeriod=10
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.580173 4834 generic.go:334] "Generic (PLEG): container finished" podID="b2bf7832-b212-49cc-a6f9-3de2a895a837" containerID="59ad7f30330276a6977ec5c3ae1386eb763e2ad6b682bf65ed5a31be3fee391c" exitCode=0
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.580258 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2hgcg" event={"ID":"b2bf7832-b212-49cc-a6f9-3de2a895a837","Type":"ContainerDied","Data":"59ad7f30330276a6977ec5c3ae1386eb763e2ad6b682bf65ed5a31be3fee391c"}
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.582493 4834 generic.go:334] "Generic (PLEG): container finished" podID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerID="09ceb7aa460369583e82475fe67466439c347b7b044b4272cf9354a778e1f523" exitCode=0
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.582531 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" event={"ID":"8f4bbb26-78db-4a78-bb34-322259b6d35e","Type":"ContainerDied","Data":"09ceb7aa460369583e82475fe67466439c347b7b044b4272cf9354a778e1f523"}
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.582553 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5" event={"ID":"8f4bbb26-78db-4a78-bb34-322259b6d35e","Type":"ContainerDied","Data":"c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495"}
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.582564 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8127ef4aea3dccd74b98892889908496d6fe591ffa23e438bd01f04c1968495"
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.590608 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5"
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.735313 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config\") pod \"8f4bbb26-78db-4a78-bb34-322259b6d35e\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") "
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.735398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc\") pod \"8f4bbb26-78db-4a78-bb34-322259b6d35e\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") "
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.735488 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb\") pod \"8f4bbb26-78db-4a78-bb34-322259b6d35e\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") "
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.736347 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmhq2\" (UniqueName: \"kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2\") pod \"8f4bbb26-78db-4a78-bb34-322259b6d35e\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") "
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.736382 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb\") pod \"8f4bbb26-78db-4a78-bb34-322259b6d35e\" (UID: \"8f4bbb26-78db-4a78-bb34-322259b6d35e\") "
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.741439 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2" (OuterVolumeSpecName: "kube-api-access-jmhq2") pod "8f4bbb26-78db-4a78-bb34-322259b6d35e" (UID: "8f4bbb26-78db-4a78-bb34-322259b6d35e"). InnerVolumeSpecName "kube-api-access-jmhq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.771908 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f4bbb26-78db-4a78-bb34-322259b6d35e" (UID: "8f4bbb26-78db-4a78-bb34-322259b6d35e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.772942 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f4bbb26-78db-4a78-bb34-322259b6d35e" (UID: "8f4bbb26-78db-4a78-bb34-322259b6d35e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.774492 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f4bbb26-78db-4a78-bb34-322259b6d35e" (UID: "8f4bbb26-78db-4a78-bb34-322259b6d35e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.778124 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config" (OuterVolumeSpecName: "config") pod "8f4bbb26-78db-4a78-bb34-322259b6d35e" (UID: "8f4bbb26-78db-4a78-bb34-322259b6d35e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.838744 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmhq2\" (UniqueName: \"kubernetes.io/projected/8f4bbb26-78db-4a78-bb34-322259b6d35e-kube-api-access-jmhq2\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.838797 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.838810 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.838822 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:29 crc kubenswrapper[4834]: I0121 15:59:29.838835 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f4bbb26-78db-4a78-bb34-322259b6d35e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:30 crc kubenswrapper[4834]: I0121 15:59:30.591457 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577666b5dc-v7sz5"
Jan 21 15:59:30 crc kubenswrapper[4834]: I0121 15:59:30.631851 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"]
Jan 21 15:59:30 crc kubenswrapper[4834]: I0121 15:59:30.639677 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-577666b5dc-v7sz5"]
Jan 21 15:59:30 crc kubenswrapper[4834]: I0121 15:59:30.950490 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060124 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060265 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060308 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060328 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060406 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzgtd\" (UniqueName: \"kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.060427 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts\") pod \"b2bf7832-b212-49cc-a6f9-3de2a895a837\" (UID: \"b2bf7832-b212-49cc-a6f9-3de2a895a837\") "
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.065417 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.065553 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd" (OuterVolumeSpecName: "kube-api-access-lzgtd") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "kube-api-access-lzgtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.067101 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.067789 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts" (OuterVolumeSpecName: "scripts") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.082120 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data" (OuterVolumeSpecName: "config-data") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.083279 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2bf7832-b212-49cc-a6f9-3de2a895a837" (UID: "b2bf7832-b212-49cc-a6f9-3de2a895a837"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162164 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162194 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162204 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162218 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzgtd\" (UniqueName: \"kubernetes.io/projected/b2bf7832-b212-49cc-a6f9-3de2a895a837-kube-api-access-lzgtd\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162231 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.162241 4834 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2bf7832-b212-49cc-a6f9-3de2a895a837-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.602786 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2hgcg" event={"ID":"b2bf7832-b212-49cc-a6f9-3de2a895a837","Type":"ContainerDied","Data":"dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6"}
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.602833 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc9bf228a881d9cbb7b8e95e51402595bd4d8a35593afd4e0442a445062d02d6"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.602906 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2hgcg"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.676730 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-75d69fc98c-bjw7h"]
Jan 21 15:59:31 crc kubenswrapper[4834]: E0121 15:59:31.677085 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="init"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677114 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="init"
Jan 21 15:59:31 crc kubenswrapper[4834]: E0121 15:59:31.677132 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bf7832-b212-49cc-a6f9-3de2a895a837" containerName="keystone-bootstrap"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677139 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bf7832-b212-49cc-a6f9-3de2a895a837" containerName="keystone-bootstrap"
Jan 21 15:59:31 crc kubenswrapper[4834]: E0121 15:59:31.677158 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="dnsmasq-dns"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677164 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="dnsmasq-dns"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677304 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" containerName="dnsmasq-dns"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677320 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bf7832-b212-49cc-a6f9-3de2a895a837" containerName="keystone-bootstrap"
Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.677873 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75d69fc98c-bjw7h"
Need to start a new one" pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.681021 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.681293 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.681426 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pjpmd" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.693123 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.713613 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75d69fc98c-bjw7h"] Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.772872 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-scripts\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.772948 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-combined-ca-bundle\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.772983 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-config-data\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.773015 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-fernet-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.773035 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxcs\" (UniqueName: \"kubernetes.io/projected/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-kube-api-access-gjxcs\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.773085 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-credential-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874620 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-scripts\") pod \"keystone-75d69fc98c-bjw7h\" 
(UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874700 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-combined-ca-bundle\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874742 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-config-data\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874771 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-fernet-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874791 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxcs\" (UniqueName: \"kubernetes.io/projected/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-kube-api-access-gjxcs\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.874837 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-credential-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.880069 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-config-data\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.880543 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-scripts\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.881944 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-combined-ca-bundle\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.882561 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-credential-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.884724 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-fernet-keys\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:31 crc kubenswrapper[4834]: I0121 15:59:31.894150 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxcs\" (UniqueName: \"kubernetes.io/projected/e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b-kube-api-access-gjxcs\") pod \"keystone-75d69fc98c-bjw7h\" (UID: \"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b\") " pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:32 crc kubenswrapper[4834]: I0121 15:59:32.003521 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:32 crc kubenswrapper[4834]: I0121 15:59:32.339950 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4bbb26-78db-4a78-bb34-322259b6d35e" path="/var/lib/kubelet/pods/8f4bbb26-78db-4a78-bb34-322259b6d35e/volumes" Jan 21 15:59:32 crc kubenswrapper[4834]: I0121 15:59:32.466919 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-75d69fc98c-bjw7h"] Jan 21 15:59:32 crc kubenswrapper[4834]: I0121 15:59:32.611674 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75d69fc98c-bjw7h" event={"ID":"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b","Type":"ContainerStarted","Data":"ca11232e9e52b7ce4234d85ff5e08ab5415c5443d3177c4c6dd6a22f3b4f6783"} Jan 21 15:59:33 crc kubenswrapper[4834]: I0121 15:59:33.626260 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-75d69fc98c-bjw7h" event={"ID":"e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b","Type":"ContainerStarted","Data":"850b43d9bd6c16000133091c1573ab0dc18d7817314f817e09b24fa276ef6cba"} Jan 21 15:59:33 crc kubenswrapper[4834]: I0121 15:59:33.626719 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 15:59:33 crc kubenswrapper[4834]: I0121 15:59:33.661657 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-75d69fc98c-bjw7h" podStartSLOduration=2.661630584 podStartE2EDuration="2.661630584s" podCreationTimestamp="2026-01-21 15:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:59:33.655018348 +0000 UTC m=+5319.629367413" watchObservedRunningTime="2026-01-21 15:59:33.661630584 +0000 UTC m=+5319.635979639" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.580709 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.582797 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.595520 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.618032 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.618311 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlbn\" (UniqueName: \"kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.618400 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.720261 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzlbn\" (UniqueName: \"kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.720332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.720911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.721323 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.721660 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.742328 4834 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qzlbn\" (UniqueName: \"kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn\") pod \"redhat-marketplace-4p6p4\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:34 crc kubenswrapper[4834]: I0121 15:59:34.918239 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:35 crc kubenswrapper[4834]: I0121 15:59:35.384955 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:35 crc kubenswrapper[4834]: W0121 15:59:35.389070 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ec6422_e0d2_4b20_a76f_3c8534d7564b.slice/crio-4bf0d266a10fff7bb2fb6c7718f316ecfb4b90e63a910f65934639cfc7036c20 WatchSource:0}: Error finding container 4bf0d266a10fff7bb2fb6c7718f316ecfb4b90e63a910f65934639cfc7036c20: Status 404 returned error can't find the container with id 4bf0d266a10fff7bb2fb6c7718f316ecfb4b90e63a910f65934639cfc7036c20 Jan 21 15:59:35 crc kubenswrapper[4834]: I0121 15:59:35.644102 4834 generic.go:334] "Generic (PLEG): container finished" podID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerID="e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42" exitCode=0 Jan 21 15:59:35 crc kubenswrapper[4834]: I0121 15:59:35.644157 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerDied","Data":"e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42"} Jan 21 15:59:35 crc kubenswrapper[4834]: I0121 15:59:35.644605 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerStarted","Data":"4bf0d266a10fff7bb2fb6c7718f316ecfb4b90e63a910f65934639cfc7036c20"} Jan 21 15:59:36 crc kubenswrapper[4834]: I0121 15:59:36.653818 4834 generic.go:334] "Generic (PLEG): container finished" podID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerID="b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0" exitCode=0 Jan 21 15:59:36 crc kubenswrapper[4834]: I0121 15:59:36.653919 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerDied","Data":"b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0"} Jan 21 15:59:37 crc kubenswrapper[4834]: I0121 15:59:37.665798 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerStarted","Data":"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32"} Jan 21 15:59:37 crc kubenswrapper[4834]: I0121 15:59:37.686692 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4p6p4" podStartSLOduration=2.268558646 podStartE2EDuration="3.686664671s" podCreationTimestamp="2026-01-21 15:59:34 +0000 UTC" firstStartedPulling="2026-01-21 15:59:35.646896806 +0000 UTC m=+5321.621245851" lastFinishedPulling="2026-01-21 15:59:37.065002821 +0000 UTC m=+5323.039351876" observedRunningTime="2026-01-21 15:59:37.68311889 +0000 UTC m=+5323.657467945" 
Jan 21 15:59:44 crc kubenswrapper[4834]: I0121 15:59:44.918778 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:44 crc kubenswrapper[4834]: I0121 15:59:44.919528 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:44 crc kubenswrapper[4834]: I0121 15:59:44.976765 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:45 crc kubenswrapper[4834]: I0121 15:59:45.781581 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:45 crc kubenswrapper[4834]: I0121 15:59:45.844164 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:47 crc kubenswrapper[4834]: I0121 15:59:47.742986 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4p6p4" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="registry-server" containerID="cri-o://a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32" gracePeriod=2 Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.730023 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p6p4" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.754964 4834 generic.go:334] "Generic (PLEG): container finished" podID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerID="a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32" exitCode=0 Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.755116 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerDied","Data":"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32"} Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.755384 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4p6p4" event={"ID":"b0ec6422-e0d2-4b20-a76f-3c8534d7564b","Type":"ContainerDied","Data":"4bf0d266a10fff7bb2fb6c7718f316ecfb4b90e63a910f65934639cfc7036c20"} Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.755210 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4p6p4"
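The "Killing container with a grace period" entry above shows the API DELETE being honoured: the kubelet asks the runtime to stop registry-server with gracePeriod=2, and about a second later the PLEG reports ContainerDied for the container and its sandbox. Over the CRI this corresponds to a StopContainer call whose timeout is the grace period, after which the runtime escalates from SIGTERM to SIGKILL. A minimal sketch against the CRI v1 API (client wiring omitted; stopWithGrace is my name, not kubelet's):

```go
package criutil

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopWithGrace mirrors the "Killing container with a grace period" entry:
// the runtime delivers SIGTERM, waits up to gracePeriod seconds for the
// process to exit, then kills it.
func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string, gracePeriod int64) error {
	_, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: id,
		Timeout:     gracePeriod, // seconds; 2 in the log entry above
	})
	return err
}
```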
Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.755420 4834 scope.go:117] "RemoveContainer" containerID="a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.775724 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzlbn\" (UniqueName: \"kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn\") pod \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.775825 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities\") pod \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.776044 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content\") pod \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\" (UID: \"b0ec6422-e0d2-4b20-a76f-3c8534d7564b\") " Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.777218 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities" (OuterVolumeSpecName: "utilities") pod "b0ec6422-e0d2-4b20-a76f-3c8534d7564b" (UID: "b0ec6422-e0d2-4b20-a76f-3c8534d7564b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.782843 4834 scope.go:117] "RemoveContainer" containerID="b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.783032 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn" (OuterVolumeSpecName: "kube-api-access-qzlbn") pod "b0ec6422-e0d2-4b20-a76f-3c8534d7564b" (UID: "b0ec6422-e0d2-4b20-a76f-3c8534d7564b"). InnerVolumeSpecName "kube-api-access-qzlbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.810482 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0ec6422-e0d2-4b20-a76f-3c8534d7564b" (UID: "b0ec6422-e0d2-4b20-a76f-3c8534d7564b"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.826163 4834 scope.go:117] "RemoveContainer" containerID="e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.856736 4834 scope.go:117] "RemoveContainer" containerID="a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32" Jan 21 15:59:48 crc kubenswrapper[4834]: E0121 15:59:48.857389 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32\": container with ID starting with a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32 not found: ID does not exist" containerID="a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.857448 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32"} err="failed to get container status \"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32\": rpc error: code = NotFound desc = could not find container \"a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32\": container with ID starting with a946d482e1b3ed92979a4267e938e1ad9673ae4807db5bcb314e2e2740c5ee32 not found: ID does not exist" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.857478 4834 scope.go:117] "RemoveContainer" containerID="b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0" Jan 21 15:59:48 crc kubenswrapper[4834]: E0121 15:59:48.857976 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0\": container with ID starting with b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0 not found: ID does not exist" containerID="b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.858017 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0"} err="failed to get container status \"b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0\": rpc error: code = NotFound desc = could not find container \"b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0\": container with ID starting with b90126ae40d5303a972055b0f06ef8f38056f73d4854cf8f6524e724ab9d7ae0 not found: ID does not exist" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.858052 4834 scope.go:117] "RemoveContainer" containerID="e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42" Jan 21 15:59:48 crc kubenswrapper[4834]: E0121 15:59:48.858454 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42\": container with ID starting with e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42 not found: ID does not exist" containerID="e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.858493 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42"} err="failed to get container status \"e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42\": rpc error: code = NotFound desc = could not find container \"e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42\": container with ID starting with e3d7f5cf7a7f9f21b61c82962b4ef92eac572f2dd410c55b02a63f3a2d66ef42 not found: ID does not exist"
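The three ContainerStatus NotFound failures above are a benign race rather than a real fault: the sandbox teardown had already removed the containers, so the deletor's follow-up status lookup finds nothing, logs the error, and moves on. Cleanup code against the CRI typically treats NotFound as "already deleted"; a minimal sketch of that pattern (removeIfPresent is my name for it, not kubelet's):

```go
package criutil

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeIfPresent deletes a container but treats gRPC NotFound as success,
// since the runtime may already have removed it together with its sandbox,
// which is exactly the race recorded in the log entries above.
func removeIfPresent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // already gone; nothing to do
	}
	return err
}
```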
Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.878595 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.878627 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzlbn\" (UniqueName: \"kubernetes.io/projected/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-kube-api-access-qzlbn\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:48 crc kubenswrapper[4834]: I0121 15:59:48.878641 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0ec6422-e0d2-4b20-a76f-3c8534d7564b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:49 crc kubenswrapper[4834]: I0121 15:59:49.149402 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:49 crc kubenswrapper[4834]: I0121 15:59:49.158178 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4p6p4"] Jan 21 15:59:50 crc kubenswrapper[4834]: I0121 15:59:50.344828 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" path="/var/lib/kubelet/pods/b0ec6422-e0d2-4b20-a76f-3c8534d7564b/volumes" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.149629 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"] Jan 21 16:00:00 crc kubenswrapper[4834]: E0121 16:00:00.151359 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.151384 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4834]: E0121 16:00:00.151408 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="extract-utilities" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.151418 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="extract-utilities" Jan 21 16:00:00 crc kubenswrapper[4834]: E0121 16:00:00.151433 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="extract-content" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.151441 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="extract-content" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.151696 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ec6422-e0d2-4b20-a76f-3c8534d7564b" containerName="registry-server" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.152788 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"
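The numeric suffix in collect-profiles-29483520 is the CronJob controller's scheduled time expressed in minutes since the Unix epoch: 29483520 min is 1769011200 s, i.e. 2026-01-21T16:00:00Z, which is exactly when the SyncLoop ADD above fires (keystone-cron-29483521, later in this log, is the 16:01 run). A quick Go check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob-created Jobs are named <cronjob>-<scheduled minutes since epoch>.
	for _, minutes := range []int64{29483520, 29483521} {
		t := time.Unix(minutes*60, 0).UTC()
		fmt.Printf("%d -> %s\n", minutes, t.Format(time.RFC3339))
	}
	// Output:
	// 29483520 -> 2026-01-21T16:00:00Z
	// 29483521 -> 2026-01-21T16:01:00Z
}
```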
Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.156353 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.157984 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.166108 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"] Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.288344 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.288408 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6cm\" (UniqueName: \"kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.288482 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.390552 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.390677 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.390726 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6cm\" (UniqueName: \"kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.391891 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName:
\"kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.398397 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.412119 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6cm\" (UniqueName: \"kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm\") pod \"collect-profiles-29483520-dws5l\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.490984 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.741235 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"] Jan 21 16:00:00 crc kubenswrapper[4834]: W0121 16:00:00.753609 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc29a68c6_c05b_4db2_aa94_52932db05d0b.slice/crio-a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68 WatchSource:0}: Error finding container a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68: Status 404 returned error can't find the container with id a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68 Jan 21 16:00:00 crc kubenswrapper[4834]: I0121 16:00:00.851536 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" event={"ID":"c29a68c6-c05b-4db2-aa94-52932db05d0b","Type":"ContainerStarted","Data":"a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68"} Jan 21 16:00:02 crc kubenswrapper[4834]: I0121 16:00:02.878109 4834 generic.go:334] "Generic (PLEG): container finished" podID="c29a68c6-c05b-4db2-aa94-52932db05d0b" containerID="07f4a4d5045f5863709cbb7e00d827610d53702f8e0c223c48eba0ed37daea14" exitCode=0 Jan 21 16:00:02 crc kubenswrapper[4834]: I0121 16:00:02.878213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" event={"ID":"c29a68c6-c05b-4db2-aa94-52932db05d0b","Type":"ContainerDied","Data":"07f4a4d5045f5863709cbb7e00d827610d53702f8e0c223c48eba0ed37daea14"} Jan 21 16:00:03 crc kubenswrapper[4834]: I0121 16:00:03.757614 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-75d69fc98c-bjw7h" Jan 21 16:00:03 crc kubenswrapper[4834]: I0121 16:00:03.978616 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:03 crc kubenswrapper[4834]: I0121 16:00:03.980859 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:03 crc kubenswrapper[4834]: I0121 16:00:03.988463 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.060982 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwqql\" (UniqueName: \"kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.061194 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.061304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.163111 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.163227 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.163281 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwqql\" (UniqueName: \"kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.163955 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.164119 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.189303 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bwqql\" (UniqueName: \"kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql\") pod \"certified-operators-qsfvv\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.271425 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.313764 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.364976 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6cm\" (UniqueName: \"kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm\") pod \"c29a68c6-c05b-4db2-aa94-52932db05d0b\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.365105 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume\") pod \"c29a68c6-c05b-4db2-aa94-52932db05d0b\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.365236 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume\") pod \"c29a68c6-c05b-4db2-aa94-52932db05d0b\" (UID: \"c29a68c6-c05b-4db2-aa94-52932db05d0b\") " Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.366261 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c29a68c6-c05b-4db2-aa94-52932db05d0b" (UID: "c29a68c6-c05b-4db2-aa94-52932db05d0b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.373059 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm" (OuterVolumeSpecName: "kube-api-access-wp6cm") pod "c29a68c6-c05b-4db2-aa94-52932db05d0b" (UID: "c29a68c6-c05b-4db2-aa94-52932db05d0b"). InnerVolumeSpecName "kube-api-access-wp6cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.379641 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c29a68c6-c05b-4db2-aa94-52932db05d0b" (UID: "c29a68c6-c05b-4db2-aa94-52932db05d0b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.468270 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c29a68c6-c05b-4db2-aa94-52932db05d0b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.468314 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c29a68c6-c05b-4db2-aa94-52932db05d0b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.468328 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6cm\" (UniqueName: \"kubernetes.io/projected/c29a68c6-c05b-4db2-aa94-52932db05d0b-kube-api-access-wp6cm\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.846944 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:04 crc kubenswrapper[4834]: W0121 16:00:04.857609 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f862404_2351_46d9_8305_3dbe419f82df.slice/crio-a194521aefaa1ac4c86468d7dba8e9d11cc3116df6c5910201eafb6764418c48 WatchSource:0}: Error finding container a194521aefaa1ac4c86468d7dba8e9d11cc3116df6c5910201eafb6764418c48: Status 404 returned error can't find the container with id a194521aefaa1ac4c86468d7dba8e9d11cc3116df6c5910201eafb6764418c48 Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.896866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" event={"ID":"c29a68c6-c05b-4db2-aa94-52932db05d0b","Type":"ContainerDied","Data":"a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68"} Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.896915 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a56225969eaee444377409fdd66aadae77a4c651d4d61865ff1e0fb7b2a72c68" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.896994 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l" Jan 21 16:00:04 crc kubenswrapper[4834]: I0121 16:00:04.904489 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerStarted","Data":"a194521aefaa1ac4c86468d7dba8e9d11cc3116df6c5910201eafb6764418c48"} Jan 21 16:00:05 crc kubenswrapper[4834]: I0121 16:00:05.351802 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw"] Jan 21 16:00:05 crc kubenswrapper[4834]: I0121 16:00:05.359685 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483475-ln2xw"] Jan 21 16:00:05 crc kubenswrapper[4834]: I0121 16:00:05.914955 4834 generic.go:334] "Generic (PLEG): container finished" podID="8f862404-2351-46d9-8305-3dbe419f82df" containerID="4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41" exitCode=0 Jan 21 16:00:05 crc kubenswrapper[4834]: I0121 16:00:05.915006 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerDied","Data":"4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41"} Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.333911 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6efbad13-9170-43d9-b945-b022f283ef27" path="/var/lib/kubelet/pods/6efbad13-9170-43d9-b945-b022f283ef27/volumes" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.785796 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:00:06 crc kubenswrapper[4834]: E0121 16:00:06.786204 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29a68c6-c05b-4db2-aa94-52932db05d0b" containerName="collect-profiles" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.786223 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29a68c6-c05b-4db2-aa94-52932db05d0b" containerName="collect-profiles" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.786439 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29a68c6-c05b-4db2-aa94-52932db05d0b" containerName="collect-profiles" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.787107 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.791674 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.791784 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lhtlg" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.791674 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.801872 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.909957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.910031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:06 crc kubenswrapper[4834]: I0121 16:00:06.910083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffb4l\" (UniqueName: \"kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.011543 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.011590 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.011657 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffb4l\" (UniqueName: \"kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.012817 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.023430 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.028388 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffb4l\" (UniqueName: \"kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l\") pod \"openstackclient\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.119372 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.560507 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.635072 4834 scope.go:117] "RemoveContainer" containerID="89b80a3d7d218b2b4f3a28be4e22ce842691c5afe1428895e94ea3cfdeafa253" Jan 21 16:00:07 crc kubenswrapper[4834]: I0121 16:00:07.930340 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2227a23c-0978-4e07-836f-8077d3190e67","Type":"ContainerStarted","Data":"b488765289166c31c90e6b6246b6af256d9724484fc8dabd3dea6c2fa346f5bf"} Jan 21 16:00:08 crc kubenswrapper[4834]: I0121 16:00:08.940648 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2227a23c-0978-4e07-836f-8077d3190e67","Type":"ContainerStarted","Data":"69f2f07ce40fbd5b7ec8f15411465be2eff92a5210ec257f8fe566e9db4d66d8"} Jan 21 16:00:08 crc kubenswrapper[4834]: I0121 16:00:08.962620 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.962598362 podStartE2EDuration="2.962598362s" podCreationTimestamp="2026-01-21 16:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:08.955340556 +0000 UTC m=+5354.929689611" watchObservedRunningTime="2026-01-21 16:00:08.962598362 +0000 UTC m=+5354.936947417" Jan 21 16:00:09 crc kubenswrapper[4834]: I0121 16:00:09.950708 4834 generic.go:334] "Generic (PLEG): container finished" podID="8f862404-2351-46d9-8305-3dbe419f82df" containerID="9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc" exitCode=0 Jan 21 16:00:09 crc kubenswrapper[4834]: I0121 16:00:09.950803 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerDied","Data":"9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc"} Jan 21 16:00:11 crc kubenswrapper[4834]: I0121 16:00:11.968664 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerStarted","Data":"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a"} Jan 21 16:00:11 crc kubenswrapper[4834]: I0121 16:00:11.995061 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsfvv" podStartSLOduration=4.082675926 podStartE2EDuration="8.995036273s" podCreationTimestamp="2026-01-21 16:00:03 +0000 UTC" firstStartedPulling="2026-01-21 16:00:05.916873026 +0000 UTC 
m=+5351.891222071" lastFinishedPulling="2026-01-21 16:00:10.829233373 +0000 UTC m=+5356.803582418" observedRunningTime="2026-01-21 16:00:11.987669463 +0000 UTC m=+5357.962018518" watchObservedRunningTime="2026-01-21 16:00:11.995036273 +0000 UTC m=+5357.969385308" Jan 21 16:00:15 crc kubenswrapper[4834]: I0121 16:00:15.356595 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:15 crc kubenswrapper[4834]: I0121 16:00:15.407452 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:15 crc kubenswrapper[4834]: I0121 16:00:15.505016 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:16 crc kubenswrapper[4834]: I0121 16:00:16.446662 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:16 crc kubenswrapper[4834]: I0121 16:00:16.509632 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.415977 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsfvv" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="registry-server" containerID="cri-o://4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a" gracePeriod=2 Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.824709 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.921577 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content\") pod \"8f862404-2351-46d9-8305-3dbe419f82df\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.921633 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities\") pod \"8f862404-2351-46d9-8305-3dbe419f82df\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.921813 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwqql\" (UniqueName: \"kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql\") pod \"8f862404-2351-46d9-8305-3dbe419f82df\" (UID: \"8f862404-2351-46d9-8305-3dbe419f82df\") " Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.922732 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities" (OuterVolumeSpecName: "utilities") pod "8f862404-2351-46d9-8305-3dbe419f82df" (UID: "8f862404-2351-46d9-8305-3dbe419f82df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.933219 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql" (OuterVolumeSpecName: "kube-api-access-bwqql") pod "8f862404-2351-46d9-8305-3dbe419f82df" (UID: "8f862404-2351-46d9-8305-3dbe419f82df"). InnerVolumeSpecName "kube-api-access-bwqql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:18 crc kubenswrapper[4834]: I0121 16:00:18.985315 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f862404-2351-46d9-8305-3dbe419f82df" (UID: "8f862404-2351-46d9-8305-3dbe419f82df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.024014 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwqql\" (UniqueName: \"kubernetes.io/projected/8f862404-2351-46d9-8305-3dbe419f82df-kube-api-access-bwqql\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.024052 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.024063 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f862404-2351-46d9-8305-3dbe419f82df-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.427733 4834 generic.go:334] "Generic (PLEG): container finished" podID="8f862404-2351-46d9-8305-3dbe419f82df" containerID="4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a" exitCode=0 Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.427771 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerDied","Data":"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a"} Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.427792 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsfvv" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.427809 4834 scope.go:117] "RemoveContainer" containerID="4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.427798 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsfvv" event={"ID":"8f862404-2351-46d9-8305-3dbe419f82df","Type":"ContainerDied","Data":"a194521aefaa1ac4c86468d7dba8e9d11cc3116df6c5910201eafb6764418c48"} Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.457834 4834 scope.go:117] "RemoveContainer" containerID="9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.460910 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.466890 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsfvv"] Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.482546 4834 scope.go:117] "RemoveContainer" containerID="4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.514265 4834 scope.go:117] "RemoveContainer" containerID="4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a" Jan 21 16:00:19 crc kubenswrapper[4834]: E0121 16:00:19.514952 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a\": container with ID starting with 4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a not found: ID does not exist" containerID="4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.515007 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a"} err="failed to get container status \"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a\": rpc error: code = NotFound desc = could not find container \"4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a\": container with ID starting with 4bafd2d245608cbc483ca2c993d2bca83201ef83c9470057aabb4ced8161034a not found: ID does not exist" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.515043 4834 scope.go:117] "RemoveContainer" containerID="9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc" Jan 21 16:00:19 crc kubenswrapper[4834]: E0121 16:00:19.515415 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc\": container with ID starting with 9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc not found: ID does not exist" containerID="9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.515452 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc"} err="failed to get container status \"9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc\": rpc error: code = NotFound desc = could not find 
container \"9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc\": container with ID starting with 9774076e151291b2c72aaf3ef44e477f7593043a5c65cf6f9a5a7b4b829d85cc not found: ID does not exist" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.515476 4834 scope.go:117] "RemoveContainer" containerID="4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41" Jan 21 16:00:19 crc kubenswrapper[4834]: E0121 16:00:19.515689 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41\": container with ID starting with 4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41 not found: ID does not exist" containerID="4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41" Jan 21 16:00:19 crc kubenswrapper[4834]: I0121 16:00:19.515717 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41"} err="failed to get container status \"4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41\": rpc error: code = NotFound desc = could not find container \"4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41\": container with ID starting with 4c8730928cadabc48758a753dd97519aa8dd5b3843d4c5edd4316cc144fb7c41 not found: ID does not exist" Jan 21 16:00:20 crc kubenswrapper[4834]: I0121 16:00:20.336006 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f862404-2351-46d9-8305-3dbe419f82df" path="/var/lib/kubelet/pods/8f862404-2351-46d9-8305-3dbe419f82df/volumes" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.161875 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483521-9g5jz"] Jan 21 16:01:00 crc kubenswrapper[4834]: E0121 16:01:00.162977 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="extract-content" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.162995 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="extract-content" Jan 21 16:01:00 crc kubenswrapper[4834]: E0121 16:01:00.163018 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="registry-server" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.163025 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="registry-server" Jan 21 16:01:00 crc kubenswrapper[4834]: E0121 16:01:00.163041 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="extract-utilities" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.163069 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="extract-utilities" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.163257 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f862404-2351-46d9-8305-3dbe419f82df" containerName="registry-server" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.164078 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.176455 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-9g5jz"] Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.304635 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.304807 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6c7\" (UniqueName: \"kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.305083 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.305287 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.406871 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6c7\" (UniqueName: \"kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.406960 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.407002 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.407079 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.413272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.413366 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.414160 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.425358 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6c7\" (UniqueName: \"kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7\") pod \"keystone-cron-29483521-9g5jz\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.489701 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:00 crc kubenswrapper[4834]: I0121 16:01:00.906459 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483521-9g5jz"] Jan 21 16:01:01 crc kubenswrapper[4834]: I0121 16:01:01.769986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-9g5jz" event={"ID":"af6b62a3-7329-456a-8ae7-bbd111be156c","Type":"ContainerStarted","Data":"ec91f5c666bafefeac983c4ed708de8c86e03c616d28701198dde734768d6f03"} Jan 21 16:01:01 crc kubenswrapper[4834]: I0121 16:01:01.770371 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-9g5jz" event={"ID":"af6b62a3-7329-456a-8ae7-bbd111be156c","Type":"ContainerStarted","Data":"e5fcec23b37a8267e8e16f0d77473b4d1b03498d3d85642fdce1b1fa35a6ba71"} Jan 21 16:01:03 crc kubenswrapper[4834]: I0121 16:01:03.785745 4834 generic.go:334] "Generic (PLEG): container finished" podID="af6b62a3-7329-456a-8ae7-bbd111be156c" containerID="ec91f5c666bafefeac983c4ed708de8c86e03c616d28701198dde734768d6f03" exitCode=0 Jan 21 16:01:03 crc kubenswrapper[4834]: I0121 16:01:03.785965 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-9g5jz" event={"ID":"af6b62a3-7329-456a-8ae7-bbd111be156c","Type":"ContainerDied","Data":"ec91f5c666bafefeac983c4ed708de8c86e03c616d28701198dde734768d6f03"} Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.135115 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.298179 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle\") pod \"af6b62a3-7329-456a-8ae7-bbd111be156c\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.298267 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys\") pod \"af6b62a3-7329-456a-8ae7-bbd111be156c\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.299185 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data\") pod \"af6b62a3-7329-456a-8ae7-bbd111be156c\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.299267 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd6c7\" (UniqueName: \"kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7\") pod \"af6b62a3-7329-456a-8ae7-bbd111be156c\" (UID: \"af6b62a3-7329-456a-8ae7-bbd111be156c\") " Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.304125 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "af6b62a3-7329-456a-8ae7-bbd111be156c" (UID: "af6b62a3-7329-456a-8ae7-bbd111be156c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.304564 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7" (OuterVolumeSpecName: "kube-api-access-cd6c7") pod "af6b62a3-7329-456a-8ae7-bbd111be156c" (UID: "af6b62a3-7329-456a-8ae7-bbd111be156c"). InnerVolumeSpecName "kube-api-access-cd6c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.322636 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af6b62a3-7329-456a-8ae7-bbd111be156c" (UID: "af6b62a3-7329-456a-8ae7-bbd111be156c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.341556 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data" (OuterVolumeSpecName: "config-data") pod "af6b62a3-7329-456a-8ae7-bbd111be156c" (UID: "af6b62a3-7329-456a-8ae7-bbd111be156c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.401201 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd6c7\" (UniqueName: \"kubernetes.io/projected/af6b62a3-7329-456a-8ae7-bbd111be156c-kube-api-access-cd6c7\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.401240 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.401252 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.401262 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6b62a3-7329-456a-8ae7-bbd111be156c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.810488 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483521-9g5jz" event={"ID":"af6b62a3-7329-456a-8ae7-bbd111be156c","Type":"ContainerDied","Data":"e5fcec23b37a8267e8e16f0d77473b4d1b03498d3d85642fdce1b1fa35a6ba71"} Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.810542 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5fcec23b37a8267e8e16f0d77473b4d1b03498d3d85642fdce1b1fa35a6ba71" Jan 21 16:01:05 crc kubenswrapper[4834]: I0121 16:01:05.810596 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483521-9g5jz" Jan 21 16:01:07 crc kubenswrapper[4834]: I0121 16:01:07.057714 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8zx8w"] Jan 21 16:01:07 crc kubenswrapper[4834]: I0121 16:01:07.064353 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8zx8w"] Jan 21 16:01:08 crc kubenswrapper[4834]: I0121 16:01:08.335427 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3421b9b-83cb-488d-8b6c-f1e6d7303ccd" path="/var/lib/kubelet/pods/e3421b9b-83cb-488d-8b6c-f1e6d7303ccd/volumes" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.590537 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hkjr6"] Jan 21 16:01:45 crc kubenswrapper[4834]: E0121 16:01:45.591361 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6b62a3-7329-456a-8ae7-bbd111be156c" containerName="keystone-cron" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.591374 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6b62a3-7329-456a-8ae7-bbd111be156c" containerName="keystone-cron" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.591532 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6b62a3-7329-456a-8ae7-bbd111be156c" containerName="keystone-cron" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.592101 4834 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.601487 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-497c-account-create-update-ll5ml"]
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.602660 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-497c-account-create-update-ll5ml"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.604132 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.615994 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hkjr6"]
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.622206 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-497c-account-create-update-ll5ml"]
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.683129 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkhlf\" (UniqueName: \"kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.683203 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.683255 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslvt\" (UniqueName: \"kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.683299 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.785202 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkhlf\" (UniqueName: \"kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.785280 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml"
Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.785334 4834 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-jslvt\" (UniqueName: \"kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.785369 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.786150 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.786686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.803074 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkhlf\" (UniqueName: \"kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf\") pod \"barbican-db-create-hkjr6\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.805264 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jslvt\" (UniqueName: \"kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt\") pod \"barbican-497c-account-create-update-ll5ml\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.920533 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:45 crc kubenswrapper[4834]: I0121 16:01:45.935210 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:46 crc kubenswrapper[4834]: I0121 16:01:46.386414 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hkjr6"] Jan 21 16:01:46 crc kubenswrapper[4834]: I0121 16:01:46.451183 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-497c-account-create-update-ll5ml"] Jan 21 16:01:46 crc kubenswrapper[4834]: W0121 16:01:46.456122 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode713393e_fda6_487a_988d_971d2c270a65.slice/crio-32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded WatchSource:0}: Error finding container 32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded: Status 404 returned error can't find the container with id 32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.114005 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.114357 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.151113 4834 generic.go:334] "Generic (PLEG): container finished" podID="af90d09b-da72-4ca5-bc92-3909ba5ac898" containerID="1d57fbeeb6983efb82a4a8767e639471c58f74be4c1cdf65c0e9c2c4b06a71f8" exitCode=0 Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.151252 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hkjr6" event={"ID":"af90d09b-da72-4ca5-bc92-3909ba5ac898","Type":"ContainerDied","Data":"1d57fbeeb6983efb82a4a8767e639471c58f74be4c1cdf65c0e9c2c4b06a71f8"} Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.151326 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hkjr6" event={"ID":"af90d09b-da72-4ca5-bc92-3909ba5ac898","Type":"ContainerStarted","Data":"99f3b528050b023b6d29b761c19beea04ea6ac42c749108b1a11ab3587492309"} Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.153884 4834 generic.go:334] "Generic (PLEG): container finished" podID="e713393e-fda6-487a-988d-971d2c270a65" containerID="296c4478ec54693a3f22f64a6ffeeab04017ad37b38c60175087f62f3f82da76" exitCode=0 Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.153994 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-497c-account-create-update-ll5ml" event={"ID":"e713393e-fda6-487a-988d-971d2c270a65","Type":"ContainerDied","Data":"296c4478ec54693a3f22f64a6ffeeab04017ad37b38c60175087f62f3f82da76"} Jan 21 16:01:47 crc kubenswrapper[4834]: I0121 16:01:47.154079 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-497c-account-create-update-ll5ml" event={"ID":"e713393e-fda6-487a-988d-971d2c270a65","Type":"ContainerStarted","Data":"32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded"} Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.558018 
4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.568244 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.632894 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts\") pod \"af90d09b-da72-4ca5-bc92-3909ba5ac898\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.632982 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkhlf\" (UniqueName: \"kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf\") pod \"af90d09b-da72-4ca5-bc92-3909ba5ac898\" (UID: \"af90d09b-da72-4ca5-bc92-3909ba5ac898\") " Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.633051 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts\") pod \"e713393e-fda6-487a-988d-971d2c270a65\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.633140 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jslvt\" (UniqueName: \"kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt\") pod \"e713393e-fda6-487a-988d-971d2c270a65\" (UID: \"e713393e-fda6-487a-988d-971d2c270a65\") " Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.633945 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af90d09b-da72-4ca5-bc92-3909ba5ac898" (UID: "af90d09b-da72-4ca5-bc92-3909ba5ac898"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.634394 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e713393e-fda6-487a-988d-971d2c270a65" (UID: "e713393e-fda6-487a-988d-971d2c270a65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.639998 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt" (OuterVolumeSpecName: "kube-api-access-jslvt") pod "e713393e-fda6-487a-988d-971d2c270a65" (UID: "e713393e-fda6-487a-988d-971d2c270a65"). InnerVolumeSpecName "kube-api-access-jslvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.640190 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf" (OuterVolumeSpecName: "kube-api-access-fkhlf") pod "af90d09b-da72-4ca5-bc92-3909ba5ac898" (UID: "af90d09b-da72-4ca5-bc92-3909ba5ac898"). InnerVolumeSpecName "kube-api-access-fkhlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.735221 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af90d09b-da72-4ca5-bc92-3909ba5ac898-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.735265 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkhlf\" (UniqueName: \"kubernetes.io/projected/af90d09b-da72-4ca5-bc92-3909ba5ac898-kube-api-access-fkhlf\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.735281 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e713393e-fda6-487a-988d-971d2c270a65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:48 crc kubenswrapper[4834]: I0121 16:01:48.735293 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jslvt\" (UniqueName: \"kubernetes.io/projected/e713393e-fda6-487a-988d-971d2c270a65-kube-api-access-jslvt\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.170280 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-497c-account-create-update-ll5ml" event={"ID":"e713393e-fda6-487a-988d-971d2c270a65","Type":"ContainerDied","Data":"32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded"} Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.170419 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c2330e73565db8d26d6e9dcccba54e509231536e2690080f0c3666f5ce7ded" Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.170537 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-497c-account-create-update-ll5ml" Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.171703 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hkjr6" event={"ID":"af90d09b-da72-4ca5-bc92-3909ba5ac898","Type":"ContainerDied","Data":"99f3b528050b023b6d29b761c19beea04ea6ac42c749108b1a11ab3587492309"} Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.171734 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f3b528050b023b6d29b761c19beea04ea6ac42c749108b1a11ab3587492309" Jan 21 16:01:49 crc kubenswrapper[4834]: I0121 16:01:49.171762 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hkjr6" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.971368 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wvv9k"] Jan 21 16:01:50 crc kubenswrapper[4834]: E0121 16:01:50.972199 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e713393e-fda6-487a-988d-971d2c270a65" containerName="mariadb-account-create-update" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.972218 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e713393e-fda6-487a-988d-971d2c270a65" containerName="mariadb-account-create-update" Jan 21 16:01:50 crc kubenswrapper[4834]: E0121 16:01:50.972240 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af90d09b-da72-4ca5-bc92-3909ba5ac898" containerName="mariadb-database-create" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.972248 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af90d09b-da72-4ca5-bc92-3909ba5ac898" containerName="mariadb-database-create" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.972458 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e713393e-fda6-487a-988d-971d2c270a65" containerName="mariadb-account-create-update" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.972477 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="af90d09b-da72-4ca5-bc92-3909ba5ac898" containerName="mariadb-database-create" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.973167 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.975984 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s5sf7" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.976848 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:01:50 crc kubenswrapper[4834]: I0121 16:01:50.992523 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wvv9k"] Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.074201 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29fx\" (UniqueName: \"kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.074280 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.074340 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.176273 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.176345 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.176426 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p29fx\" (UniqueName: \"kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.182287 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.183464 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.194400 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p29fx\" (UniqueName: \"kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx\") pod \"barbican-db-sync-wvv9k\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") " pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.292658 4834 util.go:30] "No sandbox for pod can be found. 
Jan 21 16:01:51 crc kubenswrapper[4834]: I0121 16:01:51.725790 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wvv9k"]
Jan 21 16:01:52 crc kubenswrapper[4834]: I0121 16:01:52.196803 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wvv9k" event={"ID":"18cc8860-1899-4cd2-af4f-25a4ea9ef189","Type":"ContainerStarted","Data":"914d4184b08429f5fcee563f112b3c50f39ea569d5107defb27ceceea7db2e91"}
Jan 21 16:01:53 crc kubenswrapper[4834]: I0121 16:01:53.206801 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wvv9k" event={"ID":"18cc8860-1899-4cd2-af4f-25a4ea9ef189","Type":"ContainerStarted","Data":"36408a5008327e613c172e19e71cb77247a75649a61bd4116938644799b4f560"}
Jan 21 16:01:53 crc kubenswrapper[4834]: I0121 16:01:53.227804 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wvv9k" podStartSLOduration=3.22777711 podStartE2EDuration="3.22777711s" podCreationTimestamp="2026-01-21 16:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:01:53.22172243 +0000 UTC m=+5459.196071505" watchObservedRunningTime="2026-01-21 16:01:53.22777711 +0000 UTC m=+5459.202126175"
Jan 21 16:01:56 crc kubenswrapper[4834]: I0121 16:01:56.242092 4834 generic.go:334] "Generic (PLEG): container finished" podID="18cc8860-1899-4cd2-af4f-25a4ea9ef189" containerID="36408a5008327e613c172e19e71cb77247a75649a61bd4116938644799b4f560" exitCode=0
Jan 21 16:01:56 crc kubenswrapper[4834]: I0121 16:01:56.242360 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wvv9k" event={"ID":"18cc8860-1899-4cd2-af4f-25a4ea9ef189","Type":"ContainerDied","Data":"36408a5008327e613c172e19e71cb77247a75649a61bd4116938644799b4f560"}
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.560710 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wvv9k"
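The "Observed pod startup duration" entry above is internally consistent and can be checked by hand: watchObservedRunningTime (16:01:53.22777711) minus podCreationTimestamp (16:01:50) is exactly the reported 3.22777711 s, and podStartSLOduration equals podStartE2EDuration here because the zero-valued pull timestamps mean no image-pull window was excluded. The m=+ offsets count seconds since the kubelet process started, which places that start near 14:30:54. A quick check in Python:

from datetime import datetime, timedelta

def parse(ts):
    # datetime resolves microseconds only, so trim the 8-digit fraction.
    head, _, frac = ts.partition(".")
    if frac:
        return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
    return datetime.strptime(head, "%Y-%m-%d %H:%M:%S")

created = parse("2026-01-21 16:01:50")           # podCreationTimestamp
running = parse("2026-01-21 16:01:53.22777711")  # watchObservedRunningTime
print((running - created).total_seconds())       # 3.227777 ~ podStartSLOduration

# m=+5459.202126175 is seconds of kubelet uptime at that instant, so the
# process start time falls out by subtraction:
print(running - timedelta(seconds=5459.202126175))  # 2026-01-21 14:30:54.02...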
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.684437 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle\") pod \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") "
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.684642 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data\") pod \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") "
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.685334 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p29fx\" (UniqueName: \"kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx\") pod \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\" (UID: \"18cc8860-1899-4cd2-af4f-25a4ea9ef189\") "
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.708243 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "18cc8860-1899-4cd2-af4f-25a4ea9ef189" (UID: "18cc8860-1899-4cd2-af4f-25a4ea9ef189"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.709380 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx" (OuterVolumeSpecName: "kube-api-access-p29fx") pod "18cc8860-1899-4cd2-af4f-25a4ea9ef189" (UID: "18cc8860-1899-4cd2-af4f-25a4ea9ef189"). InnerVolumeSpecName "kube-api-access-p29fx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.723504 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18cc8860-1899-4cd2-af4f-25a4ea9ef189" (UID: "18cc8860-1899-4cd2-af4f-25a4ea9ef189"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.787177 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p29fx\" (UniqueName: \"kubernetes.io/projected/18cc8860-1899-4cd2-af4f-25a4ea9ef189-kube-api-access-p29fx\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.787243 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:57 crc kubenswrapper[4834]: I0121 16:01:57.787261 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/18cc8860-1899-4cd2-af4f-25a4ea9ef189-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.264498 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wvv9k" event={"ID":"18cc8860-1899-4cd2-af4f-25a4ea9ef189","Type":"ContainerDied","Data":"914d4184b08429f5fcee563f112b3c50f39ea569d5107defb27ceceea7db2e91"} Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.264782 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="914d4184b08429f5fcee563f112b3c50f39ea569d5107defb27ceceea7db2e91" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.264558 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wvv9k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.520188 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7f7f7bcbb9-k58jh"] Jan 21 16:01:58 crc kubenswrapper[4834]: E0121 16:01:58.520615 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cc8860-1899-4cd2-af4f-25a4ea9ef189" containerName="barbican-db-sync" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.520637 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cc8860-1899-4cd2-af4f-25a4ea9ef189" containerName="barbican-db-sync" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.520853 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cc8860-1899-4cd2-af4f-25a4ea9ef189" containerName="barbican-db-sync" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.521892 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.529150 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.529374 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65dd49b774-nw7wl"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.531100 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.531361 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.531545 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s5sf7" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.535758 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.571893 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f7f7bcbb9-k58jh"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.596961 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65dd49b774-nw7wl"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.601743 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data-custom\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.601851 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.601892 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-logs\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.601921 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n49r\" (UniqueName: \"kubernetes.io/projected/80665cd8-04f6-4768-ba52-98354aad364d-kube-api-access-7n49r\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.601965 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-combined-ca-bundle\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.602007 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-combined-ca-bundle\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc 
kubenswrapper[4834]: I0121 16:01:58.602064 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24dg\" (UniqueName: \"kubernetes.io/projected/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-kube-api-access-x24dg\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.602140 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80665cd8-04f6-4768-ba52-98354aad364d-logs\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.602173 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data-custom\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.602215 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.625972 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.627915 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.636044 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80665cd8-04f6-4768-ba52-98354aad364d-logs\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703455 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data-custom\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703509 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703548 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data-custom\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703576 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703630 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703682 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-logs\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703710 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n49r\" (UniqueName: \"kubernetes.io/projected/80665cd8-04f6-4768-ba52-98354aad364d-kube-api-access-7n49r\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703739 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-combined-ca-bundle\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703780 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-combined-ca-bundle\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703834 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bmv\" (UniqueName: \"kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703868 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24dg\" (UniqueName: \"kubernetes.io/projected/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-kube-api-access-x24dg\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.703900 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.704022 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.704062 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.704615 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80665cd8-04f6-4768-ba52-98354aad364d-logs\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.708904 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-logs\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 
16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.710509 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-combined-ca-bundle\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.713357 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-combined-ca-bundle\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.719161 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.738420 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data-custom\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.750621 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24dg\" (UniqueName: \"kubernetes.io/projected/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-kube-api-access-x24dg\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.758614 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80665cd8-04f6-4768-ba52-98354aad364d-config-data-custom\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.759944 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n49r\" (UniqueName: \"kubernetes.io/projected/80665cd8-04f6-4768-ba52-98354aad364d-kube-api-access-7n49r\") pod \"barbican-worker-7f7f7bcbb9-k58jh\" (UID: \"80665cd8-04f6-4768-ba52-98354aad364d\") " pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.775181 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7-config-data\") pod \"barbican-keystone-listener-65dd49b774-nw7wl\" (UID: \"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7\") " pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.806093 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bmv\" (UniqueName: \"kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: 
\"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.806473 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.807386 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.807475 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.807654 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.807691 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.808375 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.808706 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.809063 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.837427 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fbcdf9bfb-bhr5k"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.839271 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.848739 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.863029 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bmv\" (UniqueName: \"kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv\") pod \"dnsmasq-dns-5884547677-c496g\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.867645 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.869786 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbcdf9bfb-bhr5k"] Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.898388 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.910877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-combined-ca-bundle\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.911106 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.911149 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4bbd0de-67a6-4383-b3f8-df8eebba1442-logs\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.911234 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spr4x\" (UniqueName: \"kubernetes.io/projected/a4bbd0de-67a6-4383-b3f8-df8eebba1442-kube-api-access-spr4x\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.911270 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data-custom\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:58 crc kubenswrapper[4834]: I0121 16:01:58.953643 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.012637 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spr4x\" (UniqueName: \"kubernetes.io/projected/a4bbd0de-67a6-4383-b3f8-df8eebba1442-kube-api-access-spr4x\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.012704 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data-custom\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.012753 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-combined-ca-bundle\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.012812 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.012840 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4bbd0de-67a6-4383-b3f8-df8eebba1442-logs\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.013285 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4bbd0de-67a6-4383-b3f8-df8eebba1442-logs\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.020073 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data-custom\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.021283 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-config-data\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.023796 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bbd0de-67a6-4383-b3f8-df8eebba1442-combined-ca-bundle\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc 
kubenswrapper[4834]: I0121 16:01:59.052450 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spr4x\" (UniqueName: \"kubernetes.io/projected/a4bbd0de-67a6-4383-b3f8-df8eebba1442-kube-api-access-spr4x\") pod \"barbican-api-5fbcdf9bfb-bhr5k\" (UID: \"a4bbd0de-67a6-4383-b3f8-df8eebba1442\") " pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.290902 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.528531 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f7f7bcbb9-k58jh"] Jan 21 16:01:59 crc kubenswrapper[4834]: W0121 16:01:59.546012 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80665cd8_04f6_4768_ba52_98354aad364d.slice/crio-3c35fe2def84ab46098e9be73e97c9cd6ea3585f3939ab07ef1d8132f304ee7a WatchSource:0}: Error finding container 3c35fe2def84ab46098e9be73e97c9cd6ea3585f3939ab07ef1d8132f304ee7a: Status 404 returned error can't find the container with id 3c35fe2def84ab46098e9be73e97c9cd6ea3585f3939ab07ef1d8132f304ee7a Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.599644 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.620213 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65dd49b774-nw7wl"] Jan 21 16:01:59 crc kubenswrapper[4834]: I0121 16:01:59.773770 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbcdf9bfb-bhr5k"] Jan 21 16:01:59 crc kubenswrapper[4834]: W0121 16:01:59.773839 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bbd0de_67a6_4383_b3f8_df8eebba1442.slice/crio-77bccc362c0da6dc9e84a8a31da172d166dd638a115744a5c6354c7b9fe58eca WatchSource:0}: Error finding container 77bccc362c0da6dc9e84a8a31da172d166dd638a115744a5c6354c7b9fe58eca: Status 404 returned error can't find the container with id 77bccc362c0da6dc9e84a8a31da172d166dd638a115744a5c6354c7b9fe58eca Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.308261 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" event={"ID":"80665cd8-04f6-4768-ba52-98354aad364d","Type":"ContainerStarted","Data":"8c2edb91adee8190cc0591896ef68b5934e8ffaee1765e88452e80fa4ae2fe27"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.308600 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" event={"ID":"80665cd8-04f6-4768-ba52-98354aad364d","Type":"ContainerStarted","Data":"b4ce84f1029f647d8e5b4dcc6f55a9faa6841ebcfaa55487c90daf62cd066081"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.308610 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" event={"ID":"80665cd8-04f6-4768-ba52-98354aad364d","Type":"ContainerStarted","Data":"3c35fe2def84ab46098e9be73e97c9cd6ea3585f3939ab07ef1d8132f304ee7a"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.311553 4834 generic.go:334] "Generic (PLEG): container finished" podID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerID="9a67b14dc83436e1300ee9354f16fe91ff319521cca9e6bbb873c9a90c6fb4cd" exitCode=0 Jan 21 16:02:00 crc 
kubenswrapper[4834]: I0121 16:02:00.311622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5884547677-c496g" event={"ID":"3277e37c-86cb-4662-98af-2ffdbd16a064","Type":"ContainerDied","Data":"9a67b14dc83436e1300ee9354f16fe91ff319521cca9e6bbb873c9a90c6fb4cd"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.311637 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5884547677-c496g" event={"ID":"3277e37c-86cb-4662-98af-2ffdbd16a064","Type":"ContainerStarted","Data":"1e3632173f9e68595a27e6946e8eb0872f2d06466d2a7f0fe408f1fbb80f6954"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.313901 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" event={"ID":"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7","Type":"ContainerStarted","Data":"d97efdcd3028a72c4b57bfe92c3a565ed217ee3655d365afa60e5ada5c89cf29"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.313953 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" event={"ID":"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7","Type":"ContainerStarted","Data":"553a6d7b835d38c24b391898d4621711af6d0e6d0dd98714e67abffe5a5975d3"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.318099 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" event={"ID":"a4bbd0de-67a6-4383-b3f8-df8eebba1442","Type":"ContainerStarted","Data":"2bafdf6e8892ce16f58eead078b130f90161e34ab236ff953cda3940034236a8"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.318134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" event={"ID":"a4bbd0de-67a6-4383-b3f8-df8eebba1442","Type":"ContainerStarted","Data":"77bccc362c0da6dc9e84a8a31da172d166dd638a115744a5c6354c7b9fe58eca"} Jan 21 16:02:00 crc kubenswrapper[4834]: I0121 16:02:00.334495 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7f7f7bcbb9-k58jh" podStartSLOduration=2.334465152 podStartE2EDuration="2.334465152s" podCreationTimestamp="2026-01-21 16:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:00.328388632 +0000 UTC m=+5466.302737687" watchObservedRunningTime="2026-01-21 16:02:00.334465152 +0000 UTC m=+5466.308814207" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.331261 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5884547677-c496g" event={"ID":"3277e37c-86cb-4662-98af-2ffdbd16a064","Type":"ContainerStarted","Data":"365ca94985ca4696803efaaa0abd2a5c923fedf46aeb1e102ae101f5b41e9ab3"} Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.331640 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.334438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" event={"ID":"b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7","Type":"ContainerStarted","Data":"3b89125206c187c5cdb4e95b42add460cebeca987937cbc14dc7709edccade97"} Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.337045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" 
event={"ID":"a4bbd0de-67a6-4383-b3f8-df8eebba1442","Type":"ContainerStarted","Data":"48c8b966d87117dc67a3c196686cefc32120c65d7fb65ce956092256aa66dce9"} Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.337181 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.337199 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.356206 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5884547677-c496g" podStartSLOduration=3.356178896 podStartE2EDuration="3.356178896s" podCreationTimestamp="2026-01-21 16:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:01.346405682 +0000 UTC m=+5467.320754727" watchObservedRunningTime="2026-01-21 16:02:01.356178896 +0000 UTC m=+5467.330527941" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.363756 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65dd49b774-nw7wl" podStartSLOduration=3.363735953 podStartE2EDuration="3.363735953s" podCreationTimestamp="2026-01-21 16:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:01.361895904 +0000 UTC m=+5467.336244959" watchObservedRunningTime="2026-01-21 16:02:01.363735953 +0000 UTC m=+5467.338084998" Jan 21 16:02:01 crc kubenswrapper[4834]: I0121 16:02:01.387811 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" podStartSLOduration=3.387788973 podStartE2EDuration="3.387788973s" podCreationTimestamp="2026-01-21 16:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:01.383275932 +0000 UTC m=+5467.357624977" watchObservedRunningTime="2026-01-21 16:02:01.387788973 +0000 UTC m=+5467.362138018" Jan 21 16:02:07 crc kubenswrapper[4834]: I0121 16:02:07.744488 4834 scope.go:117] "RemoveContainer" containerID="7bc2aedaa589d1b68d5e229ae9b7802a03b4cde0d8d7bd68ad58aa3e602d02b1" Jan 21 16:02:08 crc kubenswrapper[4834]: I0121 16:02:08.955069 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.033041 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.033372 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="dnsmasq-dns" containerID="cri-o://d90b20c899f69c46100fee828fdeaff3a4b6dd27a552768be89f9481c109f026" gracePeriod=10 Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.068416 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.14:5353: connect: connection refused" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.406132 4834 generic.go:334] "Generic (PLEG): container 
finished" podID="40a545af-34aa-4200-8e15-ee9b364da472" containerID="d90b20c899f69c46100fee828fdeaff3a4b6dd27a552768be89f9481c109f026" exitCode=0 Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.406483 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" event={"ID":"40a545af-34aa-4200-8e15-ee9b364da472","Type":"ContainerDied","Data":"d90b20c899f69c46100fee828fdeaff3a4b6dd27a552768be89f9481c109f026"} Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.566058 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.717766 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc\") pod \"40a545af-34aa-4200-8e15-ee9b364da472\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.717969 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb\") pod \"40a545af-34aa-4200-8e15-ee9b364da472\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.718005 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config\") pod \"40a545af-34aa-4200-8e15-ee9b364da472\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.718029 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb\") pod \"40a545af-34aa-4200-8e15-ee9b364da472\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.718104 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grmsw\" (UniqueName: \"kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw\") pod \"40a545af-34aa-4200-8e15-ee9b364da472\" (UID: \"40a545af-34aa-4200-8e15-ee9b364da472\") " Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.739540 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw" (OuterVolumeSpecName: "kube-api-access-grmsw") pod "40a545af-34aa-4200-8e15-ee9b364da472" (UID: "40a545af-34aa-4200-8e15-ee9b364da472"). InnerVolumeSpecName "kube-api-access-grmsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.765362 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "40a545af-34aa-4200-8e15-ee9b364da472" (UID: "40a545af-34aa-4200-8e15-ee9b364da472"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.765980 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "40a545af-34aa-4200-8e15-ee9b364da472" (UID: "40a545af-34aa-4200-8e15-ee9b364da472"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.774548 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config" (OuterVolumeSpecName: "config") pod "40a545af-34aa-4200-8e15-ee9b364da472" (UID: "40a545af-34aa-4200-8e15-ee9b364da472"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.777424 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "40a545af-34aa-4200-8e15-ee9b364da472" (UID: "40a545af-34aa-4200-8e15-ee9b364da472"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.820609 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.820643 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.820800 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.820812 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grmsw\" (UniqueName: \"kubernetes.io/projected/40a545af-34aa-4200-8e15-ee9b364da472-kube-api-access-grmsw\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:09 crc kubenswrapper[4834]: I0121 16:02:09.820823 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a545af-34aa-4200-8e15-ee9b364da472-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.418097 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" event={"ID":"40a545af-34aa-4200-8e15-ee9b364da472","Type":"ContainerDied","Data":"1d4b7c7e1b260ec2721e9e01c86ea9386a805853df255bd0e3029669dd9056f9"} Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.418151 4834 scope.go:117] "RemoveContainer" containerID="d90b20c899f69c46100fee828fdeaff3a4b6dd27a552768be89f9481c109f026" Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.418269 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59f4bfbbb7-vm62h" Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.448377 4834 scope.go:117] "RemoveContainer" containerID="8d0029d6421db15fa0ff87b37c3f699d3fa7b476753f02ff1eb481873602f7e6" Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.449238 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.456983 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59f4bfbbb7-vm62h"] Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.847530 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:02:10 crc kubenswrapper[4834]: I0121 16:02:10.871733 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbcdf9bfb-bhr5k" Jan 21 16:02:12 crc kubenswrapper[4834]: I0121 16:02:12.335961 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a545af-34aa-4200-8e15-ee9b364da472" path="/var/lib/kubelet/pods/40a545af-34aa-4200-8e15-ee9b364da472/volumes" Jan 21 16:02:17 crc kubenswrapper[4834]: I0121 16:02:17.115393 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:02:17 crc kubenswrapper[4834]: I0121 16:02:17.116214 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.282773 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dxndl"] Jan 21 16:02:22 crc kubenswrapper[4834]: E0121 16:02:22.283740 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="init" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.283754 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="init" Jan 21 16:02:22 crc kubenswrapper[4834]: E0121 16:02:22.283780 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="dnsmasq-dns" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.283786 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="dnsmasq-dns" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.283952 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a545af-34aa-4200-8e15-ee9b364da472" containerName="dnsmasq-dns" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.284533 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.296304 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dxndl"] Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.376952 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.377152 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtqt\" (UniqueName: \"kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.479165 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.479369 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtqt\" (UniqueName: \"kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.480848 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.496737 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3667-account-create-update-64c8h"] Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.498224 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.500381 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.512165 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3667-account-create-update-64c8h"] Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.520848 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtqt\" (UniqueName: \"kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt\") pod \"neutron-db-create-dxndl\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.581021 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjdg\" (UniqueName: \"kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.581422 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.607202 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.682606 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjdg\" (UniqueName: \"kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.682686 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.683900 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.703507 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjdg\" (UniqueName: \"kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg\") pod \"neutron-3667-account-create-update-64c8h\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:22 crc kubenswrapper[4834]: I0121 16:02:22.904913 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.047793 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dxndl"] Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.335728 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3667-account-create-update-64c8h"] Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.529159 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3667-account-create-update-64c8h" event={"ID":"53db6835-81f8-4c27-878f-1bee854eaead","Type":"ContainerStarted","Data":"c23b7e853714abdcc0c0487e4ad58762a5da224d04a598f56eab85e353e3b324"} Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.531596 4834 generic.go:334] "Generic (PLEG): container finished" podID="95fecabc-f24c-483d-85c8-d0069e888c53" containerID="d21af324c2a5f5bd84ae0a8c47859ee9aa0d9404b8f1d57acf8af59ce5e32608" exitCode=0 Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.531655 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dxndl" event={"ID":"95fecabc-f24c-483d-85c8-d0069e888c53","Type":"ContainerDied","Data":"d21af324c2a5f5bd84ae0a8c47859ee9aa0d9404b8f1d57acf8af59ce5e32608"} Jan 21 16:02:23 crc kubenswrapper[4834]: I0121 16:02:23.531741 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dxndl" event={"ID":"95fecabc-f24c-483d-85c8-d0069e888c53","Type":"ContainerStarted","Data":"dcc6724bdc10a0167261099421fb88ee471fa5a50ee704f1bc983e12b9b7f871"} Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.540701 4834 generic.go:334] "Generic (PLEG): container finished" podID="53db6835-81f8-4c27-878f-1bee854eaead" containerID="8f6cc37133e8e2cc90631789e5433ee50fef3ae3ccd9cf95a82f788208dda0bc" exitCode=0 Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.540804 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3667-account-create-update-64c8h" event={"ID":"53db6835-81f8-4c27-878f-1bee854eaead","Type":"ContainerDied","Data":"8f6cc37133e8e2cc90631789e5433ee50fef3ae3ccd9cf95a82f788208dda0bc"} Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.856521 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.920238 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts\") pod \"95fecabc-f24c-483d-85c8-d0069e888c53\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.920464 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtqt\" (UniqueName: \"kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt\") pod \"95fecabc-f24c-483d-85c8-d0069e888c53\" (UID: \"95fecabc-f24c-483d-85c8-d0069e888c53\") " Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.921316 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95fecabc-f24c-483d-85c8-d0069e888c53" (UID: "95fecabc-f24c-483d-85c8-d0069e888c53"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:24 crc kubenswrapper[4834]: I0121 16:02:24.925582 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt" (OuterVolumeSpecName: "kube-api-access-ldtqt") pod "95fecabc-f24c-483d-85c8-d0069e888c53" (UID: "95fecabc-f24c-483d-85c8-d0069e888c53"). InnerVolumeSpecName "kube-api-access-ldtqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.022399 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fecabc-f24c-483d-85c8-d0069e888c53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.022433 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtqt\" (UniqueName: \"kubernetes.io/projected/95fecabc-f24c-483d-85c8-d0069e888c53-kube-api-access-ldtqt\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.550394 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dxndl" event={"ID":"95fecabc-f24c-483d-85c8-d0069e888c53","Type":"ContainerDied","Data":"dcc6724bdc10a0167261099421fb88ee471fa5a50ee704f1bc983e12b9b7f871"} Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.550706 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc6724bdc10a0167261099421fb88ee471fa5a50ee704f1bc983e12b9b7f871" Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.550419 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dxndl" Jan 21 16:02:25 crc kubenswrapper[4834]: I0121 16:02:25.903017 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.042444 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts\") pod \"53db6835-81f8-4c27-878f-1bee854eaead\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.043141 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53db6835-81f8-4c27-878f-1bee854eaead" (UID: "53db6835-81f8-4c27-878f-1bee854eaead"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.043524 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjdg\" (UniqueName: \"kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg\") pod \"53db6835-81f8-4c27-878f-1bee854eaead\" (UID: \"53db6835-81f8-4c27-878f-1bee854eaead\") " Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.044684 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53db6835-81f8-4c27-878f-1bee854eaead-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.049684 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg" (OuterVolumeSpecName: "kube-api-access-jbjdg") pod "53db6835-81f8-4c27-878f-1bee854eaead" (UID: "53db6835-81f8-4c27-878f-1bee854eaead"). InnerVolumeSpecName "kube-api-access-jbjdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.146737 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjdg\" (UniqueName: \"kubernetes.io/projected/53db6835-81f8-4c27-878f-1bee854eaead-kube-api-access-jbjdg\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.565815 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3667-account-create-update-64c8h" event={"ID":"53db6835-81f8-4c27-878f-1bee854eaead","Type":"ContainerDied","Data":"c23b7e853714abdcc0c0487e4ad58762a5da224d04a598f56eab85e353e3b324"} Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.565857 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23b7e853714abdcc0c0487e4ad58762a5da224d04a598f56eab85e353e3b324" Jan 21 16:02:26 crc kubenswrapper[4834]: I0121 16:02:26.565903 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3667-account-create-update-64c8h" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.655721 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b7qq4"] Jan 21 16:02:27 crc kubenswrapper[4834]: E0121 16:02:27.656545 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fecabc-f24c-483d-85c8-d0069e888c53" containerName="mariadb-database-create" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.656559 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fecabc-f24c-483d-85c8-d0069e888c53" containerName="mariadb-database-create" Jan 21 16:02:27 crc kubenswrapper[4834]: E0121 16:02:27.656573 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db6835-81f8-4c27-878f-1bee854eaead" containerName="mariadb-account-create-update" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.656580 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="53db6835-81f8-4c27-878f-1bee854eaead" containerName="mariadb-account-create-update" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.656741 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="53db6835-81f8-4c27-878f-1bee854eaead" containerName="mariadb-account-create-update" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.656759 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fecabc-f24c-483d-85c8-d0069e888c53" containerName="mariadb-database-create" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.657358 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.659268 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.659766 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.659865 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4l8qm" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.675030 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b7qq4"] Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.771779 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.771865 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxqv2\" (UniqueName: \"kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.771893 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc 
kubenswrapper[4834]: I0121 16:02:27.873832 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.873973 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.874034 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxqv2\" (UniqueName: \"kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.879151 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.881113 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.891176 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxqv2\" (UniqueName: \"kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2\") pod \"neutron-db-sync-b7qq4\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:27 crc kubenswrapper[4834]: I0121 16:02:27.974780 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:28 crc kubenswrapper[4834]: I0121 16:02:28.406775 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b7qq4"] Jan 21 16:02:28 crc kubenswrapper[4834]: W0121 16:02:28.411084 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f3e6a2_443f_4851_93a7_835f9fa507ed.slice/crio-3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e WatchSource:0}: Error finding container 3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e: Status 404 returned error can't find the container with id 3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e Jan 21 16:02:28 crc kubenswrapper[4834]: I0121 16:02:28.581622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7qq4" event={"ID":"57f3e6a2-443f-4851-93a7-835f9fa507ed","Type":"ContainerStarted","Data":"3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e"} Jan 21 16:02:29 crc kubenswrapper[4834]: I0121 16:02:29.608452 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7qq4" event={"ID":"57f3e6a2-443f-4851-93a7-835f9fa507ed","Type":"ContainerStarted","Data":"9cea04e4444aa554639eed21083f5131f8a01c55c5bdcfc705163dbab58b1206"} Jan 21 16:02:29 crc kubenswrapper[4834]: I0121 16:02:29.637108 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b7qq4" podStartSLOduration=2.637086547 podStartE2EDuration="2.637086547s" podCreationTimestamp="2026-01-21 16:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:29.62979591 +0000 UTC m=+5495.604144975" watchObservedRunningTime="2026-01-21 16:02:29.637086547 +0000 UTC m=+5495.611435592" Jan 21 16:02:33 crc kubenswrapper[4834]: I0121 16:02:33.666568 4834 generic.go:334] "Generic (PLEG): container finished" podID="57f3e6a2-443f-4851-93a7-835f9fa507ed" containerID="9cea04e4444aa554639eed21083f5131f8a01c55c5bdcfc705163dbab58b1206" exitCode=0 Jan 21 16:02:33 crc kubenswrapper[4834]: I0121 16:02:33.666653 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7qq4" event={"ID":"57f3e6a2-443f-4851-93a7-835f9fa507ed","Type":"ContainerDied","Data":"9cea04e4444aa554639eed21083f5131f8a01c55c5bdcfc705163dbab58b1206"} Jan 21 16:02:34 crc kubenswrapper[4834]: I0121 16:02:34.991273 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.104619 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config\") pod \"57f3e6a2-443f-4851-93a7-835f9fa507ed\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.105016 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxqv2\" (UniqueName: \"kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2\") pod \"57f3e6a2-443f-4851-93a7-835f9fa507ed\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.105114 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle\") pod \"57f3e6a2-443f-4851-93a7-835f9fa507ed\" (UID: \"57f3e6a2-443f-4851-93a7-835f9fa507ed\") " Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.120259 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2" (OuterVolumeSpecName: "kube-api-access-rxqv2") pod "57f3e6a2-443f-4851-93a7-835f9fa507ed" (UID: "57f3e6a2-443f-4851-93a7-835f9fa507ed"). InnerVolumeSpecName "kube-api-access-rxqv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.127962 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f3e6a2-443f-4851-93a7-835f9fa507ed" (UID: "57f3e6a2-443f-4851-93a7-835f9fa507ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.132298 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config" (OuterVolumeSpecName: "config") pod "57f3e6a2-443f-4851-93a7-835f9fa507ed" (UID: "57f3e6a2-443f-4851-93a7-835f9fa507ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.206609 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.206644 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxqv2\" (UniqueName: \"kubernetes.io/projected/57f3e6a2-443f-4851-93a7-835f9fa507ed-kube-api-access-rxqv2\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.206657 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f3e6a2-443f-4851-93a7-835f9fa507ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.687561 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b7qq4" event={"ID":"57f3e6a2-443f-4851-93a7-835f9fa507ed","Type":"ContainerDied","Data":"3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e"} Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.687607 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f66e53763940137b0f4119eb0c1a2e129bef831695f4a8e342b6fb9cd67045e" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.687965 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b7qq4" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.929326 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:02:35 crc kubenswrapper[4834]: E0121 16:02:35.929738 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f3e6a2-443f-4851-93a7-835f9fa507ed" containerName="neutron-db-sync" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.929755 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f3e6a2-443f-4851-93a7-835f9fa507ed" containerName="neutron-db-sync" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.929947 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f3e6a2-443f-4851-93a7-835f9fa507ed" containerName="neutron-db-sync" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.932906 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:35 crc kubenswrapper[4834]: I0121 16:02:35.940622 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.021837 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.021953 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.022010 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.022046 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.022063 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69d7\" (UniqueName: \"kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.109164 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6dbb78977-9b4b4"] Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.111106 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.116342 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.118152 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4l8qm" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.118481 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.123765 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dbb78977-9b4b4"] Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.124394 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.130506 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.131427 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.125490 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.131614 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.131902 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69d7\" (UniqueName: \"kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.131276 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.132584 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.144598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.161761 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n69d7\" (UniqueName: \"kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7\") pod \"dnsmasq-dns-bd45554d5-rnmzt\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.233778 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-combined-ca-bundle\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.234551 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-httpd-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.234694 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p72r9\" (UniqueName: \"kubernetes.io/projected/cfaf782e-b5bf-4675-ba2d-9d0777b68260-kube-api-access-p72r9\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.234845 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.256915 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.336505 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-combined-ca-bundle\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.336788 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-httpd-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.336921 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p72r9\" (UniqueName: \"kubernetes.io/projected/cfaf782e-b5bf-4675-ba2d-9d0777b68260-kube-api-access-p72r9\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.337058 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.344009 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-combined-ca-bundle\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.346166 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.347520 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfaf782e-b5bf-4675-ba2d-9d0777b68260-httpd-config\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.365895 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p72r9\" (UniqueName: \"kubernetes.io/projected/cfaf782e-b5bf-4675-ba2d-9d0777b68260-kube-api-access-p72r9\") pod \"neutron-6dbb78977-9b4b4\" (UID: \"cfaf782e-b5bf-4675-ba2d-9d0777b68260\") " pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.448554 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:02:36 crc kubenswrapper[4834]: I0121 16:02:36.733764 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:02:37 crc kubenswrapper[4834]: I0121 16:02:37.010808 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6dbb78977-9b4b4"] Jan 21 16:02:37 crc kubenswrapper[4834]: W0121 16:02:37.017745 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfaf782e_b5bf_4675_ba2d_9d0777b68260.slice/crio-88fdf0248ed58710cb7a900d0ba53241c2fedf496ed2e174cf0ecb581665fda4 WatchSource:0}: Error finding container 88fdf0248ed58710cb7a900d0ba53241c2fedf496ed2e174cf0ecb581665fda4: Status 404 returned error can't find the container with id 88fdf0248ed58710cb7a900d0ba53241c2fedf496ed2e174cf0ecb581665fda4 Jan 21 16:02:37 crc kubenswrapper[4834]: I0121 16:02:37.706162 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerStarted","Data":"b43b6b41ecf0a207429790a99b02805eebbc226c496c90f607d3e1889e7ef476"} Jan 21 16:02:37 crc kubenswrapper[4834]: I0121 16:02:37.706217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerStarted","Data":"3f1a885ee7bf141ab8e10f8a6cb8c63d20e3f9cd35d77aebeb9b75ddf439cc67"} Jan 21 16:02:37 crc kubenswrapper[4834]: I0121 16:02:37.708452 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dbb78977-9b4b4" event={"ID":"cfaf782e-b5bf-4675-ba2d-9d0777b68260","Type":"ContainerStarted","Data":"88fdf0248ed58710cb7a900d0ba53241c2fedf496ed2e174cf0ecb581665fda4"} Jan 21 16:02:38 crc kubenswrapper[4834]: I0121 16:02:38.729233 4834 generic.go:334] "Generic (PLEG): container finished" podID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerID="b43b6b41ecf0a207429790a99b02805eebbc226c496c90f607d3e1889e7ef476" exitCode=0 Jan 21 16:02:38 crc kubenswrapper[4834]: I0121 16:02:38.729328 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerDied","Data":"b43b6b41ecf0a207429790a99b02805eebbc226c496c90f607d3e1889e7ef476"} Jan 21 16:02:38 crc kubenswrapper[4834]: I0121 16:02:38.732559 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dbb78977-9b4b4" event={"ID":"cfaf782e-b5bf-4675-ba2d-9d0777b68260","Type":"ContainerStarted","Data":"288d1ddb2eca11c6918cdd3ef98fc043d9692fb0e54e01f2a267ca103bd51c08"} Jan 21 16:02:39 crc kubenswrapper[4834]: I0121 16:02:39.742508 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6dbb78977-9b4b4" event={"ID":"cfaf782e-b5bf-4675-ba2d-9d0777b68260","Type":"ContainerStarted","Data":"959ef624e8d3db9e3365f7419d277f343fcc9ed1151ca3297ee653e4553009f0"} Jan 21 16:02:40 crc kubenswrapper[4834]: I0121 16:02:40.751467 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerStarted","Data":"be519ac11eacfed5627a9ea962b3b3ee6a99ddee0e55f106cf88708ea36ec657"} Jan 21 16:02:40 crc kubenswrapper[4834]: I0121 16:02:40.751819 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 
16:02:40 crc kubenswrapper[4834]: I0121 16:02:40.775885 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" podStartSLOduration=5.775862898 podStartE2EDuration="5.775862898s" podCreationTimestamp="2026-01-21 16:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:40.773370259 +0000 UTC m=+5506.747719314" watchObservedRunningTime="2026-01-21 16:02:40.775862898 +0000 UTC m=+5506.750211943" Jan 21 16:02:40 crc kubenswrapper[4834]: I0121 16:02:40.799912 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6dbb78977-9b4b4" podStartSLOduration=4.799871796 podStartE2EDuration="4.799871796s" podCreationTimestamp="2026-01-21 16:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:40.792026752 +0000 UTC m=+5506.766375787" watchObservedRunningTime="2026-01-21 16:02:40.799871796 +0000 UTC m=+5506.774220841" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.258129 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.319563 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.319830 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5884547677-c496g" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="dnsmasq-dns" containerID="cri-o://365ca94985ca4696803efaaa0abd2a5c923fedf46aeb1e102ae101f5b41e9ab3" gracePeriod=10 Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.797076 4834 generic.go:334] "Generic (PLEG): container finished" podID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerID="365ca94985ca4696803efaaa0abd2a5c923fedf46aeb1e102ae101f5b41e9ab3" exitCode=0 Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.797117 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5884547677-c496g" event={"ID":"3277e37c-86cb-4662-98af-2ffdbd16a064","Type":"ContainerDied","Data":"365ca94985ca4696803efaaa0abd2a5c923fedf46aeb1e102ae101f5b41e9ab3"} Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.797145 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5884547677-c496g" event={"ID":"3277e37c-86cb-4662-98af-2ffdbd16a064","Type":"ContainerDied","Data":"1e3632173f9e68595a27e6946e8eb0872f2d06466d2a7f0fe408f1fbb80f6954"} Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.797158 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e3632173f9e68595a27e6946e8eb0872f2d06466d2a7f0fe408f1fbb80f6954" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.855782 4834 util.go:48] "No ready sandbox for pod can be found. 
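Need to start a new one" pod="openstack/dnsmasq-dns-5884547677-c496g"

The two pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. For dnsmasq-dns-bd45554d5-rnmzt:

    podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
                        = 16:02:40.775862898 - 16:02:35.000000000
                        = 5.775862898 s

The all-zero firstStartedPulling/lastFinishedPulling timestamps (0001-01-01) indicate no image pull contributed to startup. Note the ordering that follows: the replacement dnsmasq pod reports readiness at 16:02:46.258, and only then does the kubelet receive SyncLoop DELETE for the old dnsmasq-dns-5884547677-c496g and kill its container with a 10 s grace period, the usual rolling-replacement handoff.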
Need to start a new one" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.938923 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb\") pod \"3277e37c-86cb-4662-98af-2ffdbd16a064\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.939038 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config\") pod \"3277e37c-86cb-4662-98af-2ffdbd16a064\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.939103 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb\") pod \"3277e37c-86cb-4662-98af-2ffdbd16a064\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.939126 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc\") pod \"3277e37c-86cb-4662-98af-2ffdbd16a064\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.939196 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2bmv\" (UniqueName: \"kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv\") pod \"3277e37c-86cb-4662-98af-2ffdbd16a064\" (UID: \"3277e37c-86cb-4662-98af-2ffdbd16a064\") " Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.953222 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv" (OuterVolumeSpecName: "kube-api-access-s2bmv") pod "3277e37c-86cb-4662-98af-2ffdbd16a064" (UID: "3277e37c-86cb-4662-98af-2ffdbd16a064"). InnerVolumeSpecName "kube-api-access-s2bmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.983770 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3277e37c-86cb-4662-98af-2ffdbd16a064" (UID: "3277e37c-86cb-4662-98af-2ffdbd16a064"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.983815 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3277e37c-86cb-4662-98af-2ffdbd16a064" (UID: "3277e37c-86cb-4662-98af-2ffdbd16a064"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.985228 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config" (OuterVolumeSpecName: "config") pod "3277e37c-86cb-4662-98af-2ffdbd16a064" (UID: "3277e37c-86cb-4662-98af-2ffdbd16a064"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:46 crc kubenswrapper[4834]: I0121 16:02:46.987264 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3277e37c-86cb-4662-98af-2ffdbd16a064" (UID: "3277e37c-86cb-4662-98af-2ffdbd16a064"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.041435 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.041471 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.041485 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2bmv\" (UniqueName: \"kubernetes.io/projected/3277e37c-86cb-4662-98af-2ffdbd16a064-kube-api-access-s2bmv\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.041494 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.041506 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3277e37c-86cb-4662-98af-2ffdbd16a064-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.114321 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.114399 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.114450 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.115244 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.115315 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" 
containerID="cri-o://d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6" gracePeriod=600 Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.806555 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6" exitCode=0 Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.807204 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5884547677-c496g" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.816967 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6"} Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.817157 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"} Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.817189 4834 scope.go:117] "RemoveContainer" containerID="7badcb67fc0735bebd51489d9e73d406632d790896a2f2e6da5c68eef289fa5c" Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.865939 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:02:47 crc kubenswrapper[4834]: I0121 16:02:47.891518 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5884547677-c496g"] Jan 21 16:02:48 crc kubenswrapper[4834]: I0121 16:02:48.336710 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" path="/var/lib/kubelet/pods/3277e37c-86cb-4662-98af-2ffdbd16a064/volumes" Jan 21 16:03:06 crc kubenswrapper[4834]: I0121 16:03:06.449690 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:03:06 crc kubenswrapper[4834]: I0121 16:03:06.463449 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6dbb78977-9b4b4" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.875403 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-695hd"] Jan 21 16:03:13 crc kubenswrapper[4834]: E0121 16:03:13.876309 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="dnsmasq-dns" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.876324 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="dnsmasq-dns" Jan 21 16:03:13 crc kubenswrapper[4834]: E0121 16:03:13.876345 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="init" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.876350 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="init" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.876522 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3277e37c-86cb-4662-98af-2ffdbd16a064" containerName="dnsmasq-dns" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.877084 4834 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-db-create-695hd" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.883472 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-695hd"] Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.958443 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.958840 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnq8\" (UniqueName: \"kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.968228 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ed6b-account-create-update-vbnjd"] Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.969751 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.973754 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 16:03:13 crc kubenswrapper[4834]: I0121 16:03:13.976912 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed6b-account-create-update-vbnjd"] Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.061313 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.061419 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.061516 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffnq8\" (UniqueName: \"kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.061655 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghxs\" (UniqueName: \"kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.062537 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.080199 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffnq8\" (UniqueName: \"kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8\") pod \"glance-db-create-695hd\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " pod="openstack/glance-db-create-695hd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.163193 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.163295 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghxs\" (UniqueName: \"kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.164337 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.181135 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghxs\" (UniqueName: \"kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs\") pod \"glance-ed6b-account-create-update-vbnjd\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.199413 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-695hd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.286419 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:14 crc kubenswrapper[4834]: I0121 16:03:14.982085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ed6b-account-create-update-vbnjd"] Jan 21 16:03:15 crc kubenswrapper[4834]: I0121 16:03:15.016786 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed6b-account-create-update-vbnjd" event={"ID":"b70e61df-6106-48d9-a3ee-25636d71694e","Type":"ContainerStarted","Data":"f4e7f36eccc1b91f605a4614ab4f8bbff8c2e7d3afe846d0c9726352a4e55b31"} Jan 21 16:03:15 crc kubenswrapper[4834]: I0121 16:03:15.053375 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-695hd"] Jan 21 16:03:15 crc kubenswrapper[4834]: W0121 16:03:15.053692 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e4edd20_e9b0_4004_a8d4_fd17cf05c32c.slice/crio-4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006 WatchSource:0}: Error finding container 4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006: Status 404 returned error can't find the container with id 4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006 Jan 21 16:03:16 crc kubenswrapper[4834]: I0121 16:03:16.025211 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-695hd" event={"ID":"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c","Type":"ContainerStarted","Data":"d815e341d3fb85bdd25828b0f1850fdb55b0f2f0d389b347afbaf01c0c338145"} Jan 21 16:03:16 crc kubenswrapper[4834]: I0121 16:03:16.025636 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-695hd" event={"ID":"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c","Type":"ContainerStarted","Data":"4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006"} Jan 21 16:03:16 crc kubenswrapper[4834]: I0121 16:03:16.028659 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed6b-account-create-update-vbnjd" event={"ID":"b70e61df-6106-48d9-a3ee-25636d71694e","Type":"ContainerStarted","Data":"7852aedc75177045582c3a4334fd68855c336b593e7b433557af98e331f02aea"} Jan 21 16:03:16 crc kubenswrapper[4834]: I0121 16:03:16.047293 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-695hd" podStartSLOduration=3.047269597 podStartE2EDuration="3.047269597s" podCreationTimestamp="2026-01-21 16:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:16.041631781 +0000 UTC m=+5542.015980826" watchObservedRunningTime="2026-01-21 16:03:16.047269597 +0000 UTC m=+5542.021618642" Jan 21 16:03:16 crc kubenswrapper[4834]: I0121 16:03:16.056816 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ed6b-account-create-update-vbnjd" podStartSLOduration=3.056790083 podStartE2EDuration="3.056790083s" podCreationTimestamp="2026-01-21 16:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:16.056466113 +0000 UTC m=+5542.030815158" watchObservedRunningTime="2026-01-21 16:03:16.056790083 +0000 UTC m=+5542.031139138" Jan 21 16:03:17 crc kubenswrapper[4834]: I0121 16:03:17.039098 4834 generic.go:334] "Generic (PLEG): container finished" podID="b70e61df-6106-48d9-a3ee-25636d71694e" 
containerID="7852aedc75177045582c3a4334fd68855c336b593e7b433557af98e331f02aea" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4834]: I0121 16:03:17.039201 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed6b-account-create-update-vbnjd" event={"ID":"b70e61df-6106-48d9-a3ee-25636d71694e","Type":"ContainerDied","Data":"7852aedc75177045582c3a4334fd68855c336b593e7b433557af98e331f02aea"} Jan 21 16:03:17 crc kubenswrapper[4834]: I0121 16:03:17.041444 4834 generic.go:334] "Generic (PLEG): container finished" podID="7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" containerID="d815e341d3fb85bdd25828b0f1850fdb55b0f2f0d389b347afbaf01c0c338145" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4834]: I0121 16:03:17.041477 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-695hd" event={"ID":"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c","Type":"ContainerDied","Data":"d815e341d3fb85bdd25828b0f1850fdb55b0f2f0d389b347afbaf01c0c338145"} Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.428822 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-695hd" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.438902 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.549625 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts\") pod \"b70e61df-6106-48d9-a3ee-25636d71694e\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.550591 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bghxs\" (UniqueName: \"kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs\") pod \"b70e61df-6106-48d9-a3ee-25636d71694e\" (UID: \"b70e61df-6106-48d9-a3ee-25636d71694e\") " Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.550682 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts\") pod \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.550721 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffnq8\" (UniqueName: \"kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8\") pod \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\" (UID: \"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c\") " Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.550714 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b70e61df-6106-48d9-a3ee-25636d71694e" (UID: "b70e61df-6106-48d9-a3ee-25636d71694e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.551502 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" (UID: "7e4edd20-e9b0-4004-a8d4-fd17cf05c32c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.551507 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b70e61df-6106-48d9-a3ee-25636d71694e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.557032 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs" (OuterVolumeSpecName: "kube-api-access-bghxs") pod "b70e61df-6106-48d9-a3ee-25636d71694e" (UID: "b70e61df-6106-48d9-a3ee-25636d71694e"). InnerVolumeSpecName "kube-api-access-bghxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.558285 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8" (OuterVolumeSpecName: "kube-api-access-ffnq8") pod "7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" (UID: "7e4edd20-e9b0-4004-a8d4-fd17cf05c32c"). InnerVolumeSpecName "kube-api-access-ffnq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.653541 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bghxs\" (UniqueName: \"kubernetes.io/projected/b70e61df-6106-48d9-a3ee-25636d71694e-kube-api-access-bghxs\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.653587 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:18 crc kubenswrapper[4834]: I0121 16:03:18.653600 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffnq8\" (UniqueName: \"kubernetes.io/projected/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c-kube-api-access-ffnq8\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.057938 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-695hd" Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.057963 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-695hd" event={"ID":"7e4edd20-e9b0-4004-a8d4-fd17cf05c32c","Type":"ContainerDied","Data":"4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006"} Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.058420 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8656d4bc7b31db9963e4d58277e2461102b0d1a73f7c1a590e8fc24c9cd006" Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.060197 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ed6b-account-create-update-vbnjd" event={"ID":"b70e61df-6106-48d9-a3ee-25636d71694e","Type":"ContainerDied","Data":"f4e7f36eccc1b91f605a4614ab4f8bbff8c2e7d3afe846d0c9726352a4e55b31"} Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.060259 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4e7f36eccc1b91f605a4614ab4f8bbff8c2e7d3afe846d0c9726352a4e55b31" Jan 21 16:03:19 crc kubenswrapper[4834]: I0121 16:03:19.060286 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ed6b-account-create-update-vbnjd" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.108683 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-slrrx"] Jan 21 16:03:24 crc kubenswrapper[4834]: E0121 16:03:24.109549 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70e61df-6106-48d9-a3ee-25636d71694e" containerName="mariadb-account-create-update" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.109562 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70e61df-6106-48d9-a3ee-25636d71694e" containerName="mariadb-account-create-update" Jan 21 16:03:24 crc kubenswrapper[4834]: E0121 16:03:24.109589 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" containerName="mariadb-database-create" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.109595 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" containerName="mariadb-database-create" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.109739 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" containerName="mariadb-database-create" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.109757 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70e61df-6106-48d9-a3ee-25636d71694e" containerName="mariadb-account-create-update" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.110331 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.112522 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6zg6h" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.112777 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.119243 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-slrrx"] Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.246573 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.246619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.246811 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtknx\" (UniqueName: \"kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.246850 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.348362 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.349396 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtknx\" (UniqueName: \"kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.349492 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.349525 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle\") pod 
\"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.355122 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.356987 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.367582 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.378772 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtknx\" (UniqueName: \"kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx\") pod \"glance-db-sync-slrrx\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.427122 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:24 crc kubenswrapper[4834]: I0121 16:03:24.949128 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-slrrx"] Jan 21 16:03:25 crc kubenswrapper[4834]: I0121 16:03:25.108655 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slrrx" event={"ID":"924f5e33-da1f-4f40-a335-a634fe4fc218","Type":"ContainerStarted","Data":"e57a26ec67f3e8365cd47a74b30bca040aff5c7fe43c3a5a8eee9c800e487617"} Jan 21 16:03:26 crc kubenswrapper[4834]: I0121 16:03:26.115566 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slrrx" event={"ID":"924f5e33-da1f-4f40-a335-a634fe4fc218","Type":"ContainerStarted","Data":"855a35d92c641738c1bab78dd7f32c70f4bea2b258e47ed98d7fcda4c377b87e"} Jan 21 16:03:26 crc kubenswrapper[4834]: I0121 16:03:26.134151 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-slrrx" podStartSLOduration=2.13413466 podStartE2EDuration="2.13413466s" podCreationTimestamp="2026-01-21 16:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:26.130868098 +0000 UTC m=+5552.105217163" watchObservedRunningTime="2026-01-21 16:03:26.13413466 +0000 UTC m=+5552.108483705" Jan 21 16:03:29 crc kubenswrapper[4834]: I0121 16:03:29.142082 4834 generic.go:334] "Generic (PLEG): container finished" podID="924f5e33-da1f-4f40-a335-a634fe4fc218" containerID="855a35d92c641738c1bab78dd7f32c70f4bea2b258e47ed98d7fcda4c377b87e" exitCode=0 Jan 21 16:03:29 crc kubenswrapper[4834]: I0121 16:03:29.142197 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slrrx" 
event={"ID":"924f5e33-da1f-4f40-a335-a634fe4fc218","Type":"ContainerDied","Data":"855a35d92c641738c1bab78dd7f32c70f4bea2b258e47ed98d7fcda4c377b87e"} Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.569140 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.654937 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle\") pod \"924f5e33-da1f-4f40-a335-a634fe4fc218\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.655293 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data\") pod \"924f5e33-da1f-4f40-a335-a634fe4fc218\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.655508 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data\") pod \"924f5e33-da1f-4f40-a335-a634fe4fc218\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.655809 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtknx\" (UniqueName: \"kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx\") pod \"924f5e33-da1f-4f40-a335-a634fe4fc218\" (UID: \"924f5e33-da1f-4f40-a335-a634fe4fc218\") " Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.665391 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx" (OuterVolumeSpecName: "kube-api-access-rtknx") pod "924f5e33-da1f-4f40-a335-a634fe4fc218" (UID: "924f5e33-da1f-4f40-a335-a634fe4fc218"). InnerVolumeSpecName "kube-api-access-rtknx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.665553 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "924f5e33-da1f-4f40-a335-a634fe4fc218" (UID: "924f5e33-da1f-4f40-a335-a634fe4fc218"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.687056 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "924f5e33-da1f-4f40-a335-a634fe4fc218" (UID: "924f5e33-da1f-4f40-a335-a634fe4fc218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.723799 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data" (OuterVolumeSpecName: "config-data") pod "924f5e33-da1f-4f40-a335-a634fe4fc218" (UID: "924f5e33-da1f-4f40-a335-a634fe4fc218"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.757904 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.757962 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtknx\" (UniqueName: \"kubernetes.io/projected/924f5e33-da1f-4f40-a335-a634fe4fc218-kube-api-access-rtknx\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.757977 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:30 crc kubenswrapper[4834]: I0121 16:03:30.757988 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/924f5e33-da1f-4f40-a335-a634fe4fc218-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.162582 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slrrx" event={"ID":"924f5e33-da1f-4f40-a335-a634fe4fc218","Type":"ContainerDied","Data":"e57a26ec67f3e8365cd47a74b30bca040aff5c7fe43c3a5a8eee9c800e487617"} Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.162956 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e57a26ec67f3e8365cd47a74b30bca040aff5c7fe43c3a5a8eee9c800e487617" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.162678 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-slrrx" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.466736 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:31 crc kubenswrapper[4834]: E0121 16:03:31.467260 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924f5e33-da1f-4f40-a335-a634fe4fc218" containerName="glance-db-sync" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.467283 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="924f5e33-da1f-4f40-a335-a634fe4fc218" containerName="glance-db-sync" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.467530 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="924f5e33-da1f-4f40-a335-a634fe4fc218" containerName="glance-db-sync" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.468822 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.473496 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.474644 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6zg6h" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.475371 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.475577 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.497868 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569395 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569462 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv288\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569502 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569525 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569540 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569559 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.569663 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.615845 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.618465 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.627914 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671505 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671578 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5gv\" (UniqueName: \"kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671649 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671684 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671741 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671803 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv288\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671842 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 
16:03:31.671914 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671969 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.671998 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.672028 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.672051 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.672595 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.674316 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.679015 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.679464 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.679725 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.680297 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.700607 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv288\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288\") pod \"glance-default-external-api-0\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.715664 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.717999 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.723542 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.762339 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.773800 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.773908 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.773980 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774015 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774042 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsc5m\" (UniqueName: 
\"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774073 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774116 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5gv\" (UniqueName: \"kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774173 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774197 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774224 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774263 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.774288 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.775475 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.776721 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb\") pod 
\"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.776966 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.779795 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.787350 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.796539 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5gv\" (UniqueName: \"kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv\") pod \"dnsmasq-dns-74648d7f5-f6c4t\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.876523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877006 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877039 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877060 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877105 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877736 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.877830 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsc5m\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.879275 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.879275 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.883642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.885285 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.889356 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.890544 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.899635 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsc5m\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m\") pod \"glance-default-internal-api-0\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:31 crc kubenswrapper[4834]: I0121 16:03:31.936830 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:32 crc kubenswrapper[4834]: I0121 16:03:32.072066 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:32 crc kubenswrapper[4834]: I0121 16:03:32.259649 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:03:32 crc kubenswrapper[4834]: I0121 16:03:32.468442 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:32 crc kubenswrapper[4834]: I0121 16:03:32.735878 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:32 crc kubenswrapper[4834]: I0121 16:03:32.937704 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:33 crc kubenswrapper[4834]: I0121 16:03:33.214026 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerStarted","Data":"a0d0fff1a8bc38780f008f20511ce026bab1d2c7039cc76e04c9772ddac1c3cf"} Jan 21 16:03:33 crc kubenswrapper[4834]: I0121 16:03:33.236694 4834 generic.go:334] "Generic (PLEG): container finished" podID="c3267acd-477e-4723-af63-957f157352df" containerID="43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15" exitCode=0 Jan 21 16:03:33 crc kubenswrapper[4834]: I0121 16:03:33.238360 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" event={"ID":"c3267acd-477e-4723-af63-957f157352df","Type":"ContainerDied","Data":"43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15"} Jan 21 16:03:33 crc kubenswrapper[4834]: I0121 16:03:33.238438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" event={"ID":"c3267acd-477e-4723-af63-957f157352df","Type":"ContainerStarted","Data":"5d8a2fcb7b3f1f082ae5df61aa150565285b1723e58fd85de57285bdb11b683d"} Jan 21 16:03:33 crc kubenswrapper[4834]: I0121 16:03:33.247214 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerStarted","Data":"59c0fba23295ce33825c4bb23eec941c8a07d9ff1c7d8bbdae8af19d623887d6"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.266580 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" event={"ID":"c3267acd-477e-4723-af63-957f157352df","Type":"ContainerStarted","Data":"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.266918 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.268524 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerStarted","Data":"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.268568 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerStarted","Data":"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.268607 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-log" containerID="cri-o://1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" gracePeriod=30 Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.268645 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-httpd" containerID="cri-o://37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" gracePeriod=30 Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.270979 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerStarted","Data":"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.271034 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerStarted","Data":"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"} Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.326981 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.326914316 podStartE2EDuration="3.326914316s" podCreationTimestamp="2026-01-21 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:34.311514656 +0000 UTC m=+5560.285863711" watchObservedRunningTime="2026-01-21 16:03:34.326914316 +0000 UTC m=+5560.301263361" Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.328212 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" podStartSLOduration=3.328201686 podStartE2EDuration="3.328201686s" podCreationTimestamp="2026-01-21 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:34.29021605 +0000 UTC m=+5560.264565115" watchObservedRunningTime="2026-01-21 16:03:34.328201686 +0000 UTC m=+5560.302550731" Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.346179 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.346150276 podStartE2EDuration="3.346150276s" podCreationTimestamp="2026-01-21 16:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:34.341439489 +0000 UTC m=+5560.315788534" watchObservedRunningTime="2026-01-21 16:03:34.346150276 +0000 UTC m=+5560.320499321" Jan 21 16:03:34 crc kubenswrapper[4834]: I0121 16:03:34.930480 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072206 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072277 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072348 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072375 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv288\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072416 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072468 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072492 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data\") pod \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\" (UID: \"516a992d-ea53-4b4e-b587-bf3745a4aa9f\") " Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072869 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs" (OuterVolumeSpecName: "logs") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.072983 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.073078 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.077909 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph" (OuterVolumeSpecName: "ceph") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.078988 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288" (OuterVolumeSpecName: "kube-api-access-wv288") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "kube-api-access-wv288". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.092132 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts" (OuterVolumeSpecName: "scripts") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.112859 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.143913 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data" (OuterVolumeSpecName: "config-data") pod "516a992d-ea53-4b4e-b587-bf3745a4aa9f" (UID: "516a992d-ea53-4b4e-b587-bf3745a4aa9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175087 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516a992d-ea53-4b4e-b587-bf3745a4aa9f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175125 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175134 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175142 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516a992d-ea53-4b4e-b587-bf3745a4aa9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175153 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.175162 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv288\" (UniqueName: \"kubernetes.io/projected/516a992d-ea53-4b4e-b587-bf3745a4aa9f-kube-api-access-wv288\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281221 4834 generic.go:334] "Generic (PLEG): container finished" podID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerID="37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" exitCode=0 Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281263 4834 generic.go:334] "Generic (PLEG): container finished" podID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerID="1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" exitCode=143 Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281286 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281352 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerDied","Data":"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431"} Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281381 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerDied","Data":"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9"} Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281391 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"516a992d-ea53-4b4e-b587-bf3745a4aa9f","Type":"ContainerDied","Data":"59c0fba23295ce33825c4bb23eec941c8a07d9ff1c7d8bbdae8af19d623887d6"} Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.281408 4834 scope.go:117] "RemoveContainer" containerID="37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.308450 4834 scope.go:117] "RemoveContainer" containerID="1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.319174 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.330199 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.341276 4834 scope.go:117] "RemoveContainer" containerID="37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" Jan 21 16:03:35 crc kubenswrapper[4834]: E0121 16:03:35.341884 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431\": container with ID starting with 37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431 not found: ID does not exist" containerID="37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.341916 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431"} err="failed to get container status \"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431\": rpc error: code = NotFound desc = could not find container \"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431\": container with ID starting with 37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431 not found: ID does not exist" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.341957 4834 scope.go:117] "RemoveContainer" containerID="1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" Jan 21 16:03:35 crc kubenswrapper[4834]: E0121 16:03:35.342434 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9\": container with ID starting with 1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9 not found: ID does not exist" 
containerID="1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.342462 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9"} err="failed to get container status \"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9\": rpc error: code = NotFound desc = could not find container \"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9\": container with ID starting with 1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9 not found: ID does not exist" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.342478 4834 scope.go:117] "RemoveContainer" containerID="37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.342736 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431"} err="failed to get container status \"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431\": rpc error: code = NotFound desc = could not find container \"37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431\": container with ID starting with 37a2e1854697326a62cba56556a3a2918dc39d6b3d9da5a9a695242154118431 not found: ID does not exist" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.342756 4834 scope.go:117] "RemoveContainer" containerID="1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.343048 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9"} err="failed to get container status \"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9\": rpc error: code = NotFound desc = could not find container \"1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9\": container with ID starting with 1a2e1340c799db92f660f5035573a57b04f90dfbea75cd8f56aed9189c0955f9 not found: ID does not exist" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.350491 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:03:35 crc kubenswrapper[4834]: E0121 16:03:35.350892 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-log" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.350910 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-log" Jan 21 16:03:35 crc kubenswrapper[4834]: E0121 16:03:35.350950 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-httpd" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.350956 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-httpd" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.351116 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-log" Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.351130 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" containerName="glance-httpd" 
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.351999 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.356769 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.366779 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.484401 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.484764 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.484831 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.484867 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.485026 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.485143 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9s4\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.485186 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.556446 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587027 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587101 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587149 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587229 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587269 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9s4\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587292 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587665 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.587693 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.591991 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.592457 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.592527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.592586 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.609572 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9s4\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4\") pod \"glance-default-external-api-0\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:03:35 crc kubenswrapper[4834]: I0121 16:03:35.669314 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 16:03:36 crc kubenswrapper[4834]: I0121 16:03:36.237224 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:03:36 crc kubenswrapper[4834]: I0121 16:03:36.298341 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerStarted","Data":"e6abacc9f26bbb09f98f6a7d77fddfc240ab39477ab6190542bd7b4f065c9bbd"}
Jan 21 16:03:36 crc kubenswrapper[4834]: I0121 16:03:36.298518 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-log" containerID="cri-o://d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319" gracePeriod=30
Jan 21 16:03:36 crc kubenswrapper[4834]: I0121 16:03:36.298626 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-httpd" containerID="cri-o://059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103" gracePeriod=30
Jan 21 16:03:36 crc kubenswrapper[4834]: I0121 16:03:36.340827 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516a992d-ea53-4b4e-b587-bf3745a4aa9f" path="/var/lib/kubelet/pods/516a992d-ea53-4b4e-b587-bf3745a4aa9f/volumes"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.129247 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.217164 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.217984 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.218148 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.218187 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.218847 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.218881 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.218962 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsc5m\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.219015 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data\") pod \"55530fa6-ce05-4faa-b262-8b30a13b1542\" (UID: \"55530fa6-ce05-4faa-b262-8b30a13b1542\") "
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.219430 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs" (OuterVolumeSpecName: "logs") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.219835 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.219857 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55530fa6-ce05-4faa-b262-8b30a13b1542-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.223902 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph" (OuterVolumeSpecName: "ceph") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.225154 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m" (OuterVolumeSpecName: "kube-api-access-xsc5m") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "kube-api-access-xsc5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.226972 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts" (OuterVolumeSpecName: "scripts") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.249124 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.286284 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data" (OuterVolumeSpecName: "config-data") pod "55530fa6-ce05-4faa-b262-8b30a13b1542" (UID: "55530fa6-ce05-4faa-b262-8b30a13b1542"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.322011 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.322046 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.322058 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.322087 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsc5m\" (UniqueName: \"kubernetes.io/projected/55530fa6-ce05-4faa-b262-8b30a13b1542-kube-api-access-xsc5m\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.322101 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55530fa6-ce05-4faa-b262-8b30a13b1542-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.330357 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerStarted","Data":"d886106ede6b5724900dfa95fce683b870c2ef13944ad83a7b7c53ba355edeff"}
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.335955 4834 generic.go:334] "Generic (PLEG): container finished" podID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerID="059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103" exitCode=0
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.335992 4834 generic.go:334] "Generic (PLEG): container finished" podID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerID="d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319" exitCode=143
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.336014 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerDied","Data":"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"}
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.336045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerDied","Data":"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"}
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.336047 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.336075 4834 scope.go:117] "RemoveContainer" containerID="059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.336057 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"55530fa6-ce05-4faa-b262-8b30a13b1542","Type":"ContainerDied","Data":"a0d0fff1a8bc38780f008f20511ce026bab1d2c7039cc76e04c9772ddac1c3cf"}
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.373735 4834 scope.go:117] "RemoveContainer" containerID="d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.384979 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.404449 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.413127 4834 scope.go:117] "RemoveContainer" containerID="059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"
Jan 21 16:03:37 crc kubenswrapper[4834]: E0121 16:03:37.414185 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103\": container with ID starting with 059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103 not found: ID does not exist" containerID="059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.414215 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"} err="failed to get container status \"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103\": rpc error: code = NotFound desc = could not find container \"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103\": container with ID starting with 059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103 not found: ID does not exist"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.414238 4834 scope.go:117] "RemoveContainer" containerID="d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"
Jan 21 16:03:37 crc kubenswrapper[4834]: E0121 16:03:37.415248 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319\": container with ID starting with d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319 not found: ID does not exist" containerID="d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"
Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.415300 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"} err="failed to get container status \"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319\": rpc error: code = NotFound desc = could not find container \"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319\": container with ID starting with d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319 not found: ID does not exist"
Jan 21
16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.415335 4834 scope.go:117] "RemoveContainer" containerID="059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.415567 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103"} err="failed to get container status \"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103\": rpc error: code = NotFound desc = could not find container \"059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103\": container with ID starting with 059a5fe82e8fa6fab434103dc736ea6e296c60bcace02e4556d3bee1e4b0c103 not found: ID does not exist" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.415599 4834 scope.go:117] "RemoveContainer" containerID="d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.415768 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319"} err="failed to get container status \"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319\": rpc error: code = NotFound desc = could not find container \"d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319\": container with ID starting with d3fbf5cba5aa31b42f00b0f0c9b71c299bd318445c4d2ac1cb01c425e55a9319 not found: ID does not exist" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.430072 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:37 crc kubenswrapper[4834]: E0121 16:03:37.430597 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-httpd" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.430617 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-httpd" Jan 21 16:03:37 crc kubenswrapper[4834]: E0121 16:03:37.430655 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-log" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.430662 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-log" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.430825 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-httpd" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.430843 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" containerName="glance-log" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.431791 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.439597 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.451039 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524826 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkdz\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524859 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524918 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524950 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.524981 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.525001 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626125 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") 
" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626173 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626213 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626239 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626285 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626312 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkdz\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.626340 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.627655 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.627980 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.631360 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.632198 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.633303 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.635435 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.646959 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkdz\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz\") pod \"glance-default-internal-api-0\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:03:37 crc kubenswrapper[4834]: I0121 16:03:37.767642 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:38 crc kubenswrapper[4834]: I0121 16:03:38.335284 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55530fa6-ce05-4faa-b262-8b30a13b1542" path="/var/lib/kubelet/pods/55530fa6-ce05-4faa-b262-8b30a13b1542/volumes" Jan 21 16:03:38 crc kubenswrapper[4834]: I0121 16:03:38.354000 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:03:38 crc kubenswrapper[4834]: I0121 16:03:38.359481 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerStarted","Data":"0d064c168b9b89f729fda3eb70534de975c4fe51f00a37e3550137fc504ccdf7"} Jan 21 16:03:38 crc kubenswrapper[4834]: W0121 16:03:38.366256 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cde9223_344d_47b3_afc8_295962641499.slice/crio-0b45d55b4e2584b3f08897e9db3570b006542d3736e121c39e8cdfdcf03327d0 WatchSource:0}: Error finding container 0b45d55b4e2584b3f08897e9db3570b006542d3736e121c39e8cdfdcf03327d0: Status 404 returned error can't find the container with id 0b45d55b4e2584b3f08897e9db3570b006542d3736e121c39e8cdfdcf03327d0 Jan 21 16:03:38 crc kubenswrapper[4834]: I0121 16:03:38.391444 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.3914174040000002 podStartE2EDuration="3.391417404s" podCreationTimestamp="2026-01-21 16:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:38.387224343 +0000 UTC m=+5564.361573398" watchObservedRunningTime="2026-01-21 16:03:38.391417404 +0000 UTC m=+5564.365766449" Jan 21 16:03:39 crc kubenswrapper[4834]: I0121 16:03:39.371912 4834 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerStarted","Data":"5aa6aa3e232c7fc2eabf7cd295e61ee64b75142ca5801d38b906eee1ce90e604"} Jan 21 16:03:39 crc kubenswrapper[4834]: I0121 16:03:39.372447 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerStarted","Data":"0b45d55b4e2584b3f08897e9db3570b006542d3736e121c39e8cdfdcf03327d0"} Jan 21 16:03:40 crc kubenswrapper[4834]: I0121 16:03:40.383630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerStarted","Data":"d2ee4ef26d47f2a6e9417441cc5f03cdf22d68c010e0a68999890d1384dd9f05"} Jan 21 16:03:40 crc kubenswrapper[4834]: I0121 16:03:40.437052 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.4370142 podStartE2EDuration="3.4370142s" podCreationTimestamp="2026-01-21 16:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:03:40.405579769 +0000 UTC m=+5566.379928824" watchObservedRunningTime="2026-01-21 16:03:40.4370142 +0000 UTC m=+5566.411363255" Jan 21 16:03:41 crc kubenswrapper[4834]: I0121 16:03:41.940137 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:03:42 crc kubenswrapper[4834]: I0121 16:03:42.011900 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:03:42 crc kubenswrapper[4834]: I0121 16:03:42.012177 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="dnsmasq-dns" containerID="cri-o://be519ac11eacfed5627a9ea962b3b3ee6a99ddee0e55f106cf88708ea36ec657" gracePeriod=10 Jan 21 16:03:42 crc kubenswrapper[4834]: I0121 16:03:42.405455 4834 generic.go:334] "Generic (PLEG): container finished" podID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerID="be519ac11eacfed5627a9ea962b3b3ee6a99ddee0e55f106cf88708ea36ec657" exitCode=0 Jan 21 16:03:42 crc kubenswrapper[4834]: I0121 16:03:42.405548 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerDied","Data":"be519ac11eacfed5627a9ea962b3b3ee6a99ddee0e55f106cf88708ea36ec657"} Jan 21 16:03:42 crc kubenswrapper[4834]: I0121 16:03:42.985084 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.028545 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb\") pod \"acb169ac-8b52-4db7-a063-7d5a45b66151\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.028598 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc\") pod \"acb169ac-8b52-4db7-a063-7d5a45b66151\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.028717 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config\") pod \"acb169ac-8b52-4db7-a063-7d5a45b66151\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.028741 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb\") pod \"acb169ac-8b52-4db7-a063-7d5a45b66151\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.028847 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n69d7\" (UniqueName: \"kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7\") pod \"acb169ac-8b52-4db7-a063-7d5a45b66151\" (UID: \"acb169ac-8b52-4db7-a063-7d5a45b66151\") " Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.050217 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7" (OuterVolumeSpecName: "kube-api-access-n69d7") pod "acb169ac-8b52-4db7-a063-7d5a45b66151" (UID: "acb169ac-8b52-4db7-a063-7d5a45b66151"). InnerVolumeSpecName "kube-api-access-n69d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.077772 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acb169ac-8b52-4db7-a063-7d5a45b66151" (UID: "acb169ac-8b52-4db7-a063-7d5a45b66151"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.081369 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "acb169ac-8b52-4db7-a063-7d5a45b66151" (UID: "acb169ac-8b52-4db7-a063-7d5a45b66151"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.083356 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acb169ac-8b52-4db7-a063-7d5a45b66151" (UID: "acb169ac-8b52-4db7-a063-7d5a45b66151"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.083597 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config" (OuterVolumeSpecName: "config") pod "acb169ac-8b52-4db7-a063-7d5a45b66151" (UID: "acb169ac-8b52-4db7-a063-7d5a45b66151"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.131417 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n69d7\" (UniqueName: \"kubernetes.io/projected/acb169ac-8b52-4db7-a063-7d5a45b66151-kube-api-access-n69d7\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.131454 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.131469 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.131480 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.131491 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb169ac-8b52-4db7-a063-7d5a45b66151-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.417030 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" event={"ID":"acb169ac-8b52-4db7-a063-7d5a45b66151","Type":"ContainerDied","Data":"3f1a885ee7bf141ab8e10f8a6cb8c63d20e3f9cd35d77aebeb9b75ddf439cc67"} Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.417074 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd45554d5-rnmzt" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.417103 4834 scope.go:117] "RemoveContainer" containerID="be519ac11eacfed5627a9ea962b3b3ee6a99ddee0e55f106cf88708ea36ec657" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.455396 4834 scope.go:117] "RemoveContainer" containerID="b43b6b41ecf0a207429790a99b02805eebbc226c496c90f607d3e1889e7ef476" Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.464622 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:03:43 crc kubenswrapper[4834]: I0121 16:03:43.470119 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bd45554d5-rnmzt"] Jan 21 16:03:44 crc kubenswrapper[4834]: I0121 16:03:44.361179 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" path="/var/lib/kubelet/pods/acb169ac-8b52-4db7-a063-7d5a45b66151/volumes" Jan 21 16:03:45 crc kubenswrapper[4834]: I0121 16:03:45.669561 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:03:45 crc kubenswrapper[4834]: I0121 16:03:45.669634 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:03:45 crc kubenswrapper[4834]: I0121 16:03:45.698213 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:03:45 crc kubenswrapper[4834]: I0121 16:03:45.716416 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:03:46 crc kubenswrapper[4834]: I0121 16:03:46.442837 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:03:46 crc kubenswrapper[4834]: I0121 16:03:46.443597 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:03:47 crc kubenswrapper[4834]: I0121 16:03:47.768367 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:47 crc kubenswrapper[4834]: I0121 16:03:47.768439 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:47 crc kubenswrapper[4834]: I0121 16:03:47.799761 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:47 crc kubenswrapper[4834]: I0121 16:03:47.807877 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.457408 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.457442 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.458965 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.459022 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.676655 4834 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:03:48 crc kubenswrapper[4834]: I0121 16:03:48.678434 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:03:50 crc kubenswrapper[4834]: I0121 16:03:50.454982 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:03:50 crc kubenswrapper[4834]: I0121 16:03:50.468435 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.268508 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-jtkvh"] Jan 21 16:04:00 crc kubenswrapper[4834]: E0121 16:04:00.269548 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="dnsmasq-dns" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.269569 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="dnsmasq-dns" Jan 21 16:04:00 crc kubenswrapper[4834]: E0121 16:04:00.269585 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="init" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.269594 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="init" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.269834 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb169ac-8b52-4db7-a063-7d5a45b66151" containerName="dnsmasq-dns" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.270590 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.283106 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jtkvh"] Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.356071 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmwh\" (UniqueName: \"kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.356236 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.359688 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e44c-account-create-update-rg8tm"] Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.361541 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.363791 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.369606 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e44c-account-create-update-rg8tm"] Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.458631 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.458678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.458740 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbm6\" (UniqueName: \"kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.458805 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdmwh\" (UniqueName: \"kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.460430 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.477084 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdmwh\" (UniqueName: \"kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh\") pod \"placement-db-create-jtkvh\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.559858 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.559970 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbm6\" (UniqueName: 
\"kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.560677 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.575183 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbm6\" (UniqueName: \"kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6\") pod \"placement-e44c-account-create-update-rg8tm\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.597644 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:00 crc kubenswrapper[4834]: I0121 16:04:00.679401 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.067540 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jtkvh"] Jan 21 16:04:01 crc kubenswrapper[4834]: W0121 16:04:01.205227 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d12dc70_c2d4_42e8_b021_a500e3f3dabe.slice/crio-4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7 WatchSource:0}: Error finding container 4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7: Status 404 returned error can't find the container with id 4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7 Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.208595 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e44c-account-create-update-rg8tm"] Jan 21 16:04:01 crc kubenswrapper[4834]: E0121 16:04:01.508140 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e078da_a5e4_49c9_8724_560b2354f424.slice/crio-conmon-a71ad9235bf3a613cae2f1e141e1cfffcd4df65560cab09adbd772c85d3b0dda.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e078da_a5e4_49c9_8724_560b2354f424.slice/crio-a71ad9235bf3a613cae2f1e141e1cfffcd4df65560cab09adbd772c85d3b0dda.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.592103 4834 generic.go:334] "Generic (PLEG): container finished" podID="c0e078da-a5e4-49c9-8724-560b2354f424" containerID="a71ad9235bf3a613cae2f1e141e1cfffcd4df65560cab09adbd772c85d3b0dda" exitCode=0 Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.592183 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jtkvh" 
event={"ID":"c0e078da-a5e4-49c9-8724-560b2354f424","Type":"ContainerDied","Data":"a71ad9235bf3a613cae2f1e141e1cfffcd4df65560cab09adbd772c85d3b0dda"} Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.592534 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jtkvh" event={"ID":"c0e078da-a5e4-49c9-8724-560b2354f424","Type":"ContainerStarted","Data":"a7fe45e5c95710ad7be633fb19b7ed56babc9053c9507a10c135305b9abd1ffd"} Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.595486 4834 generic.go:334] "Generic (PLEG): container finished" podID="1d12dc70-c2d4-42e8-b021-a500e3f3dabe" containerID="82005a3d5c913e95e851ea6893308e98bc455fb856413999fb6ebe7bc694e2cf" exitCode=0 Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.595526 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e44c-account-create-update-rg8tm" event={"ID":"1d12dc70-c2d4-42e8-b021-a500e3f3dabe","Type":"ContainerDied","Data":"82005a3d5c913e95e851ea6893308e98bc455fb856413999fb6ebe7bc694e2cf"} Jan 21 16:04:01 crc kubenswrapper[4834]: I0121 16:04:01.595555 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e44c-account-create-update-rg8tm" event={"ID":"1d12dc70-c2d4-42e8-b021-a500e3f3dabe","Type":"ContainerStarted","Data":"4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7"} Jan 21 16:04:02 crc kubenswrapper[4834]: I0121 16:04:02.978020 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.054548 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.120858 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdmwh\" (UniqueName: \"kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh\") pod \"c0e078da-a5e4-49c9-8724-560b2354f424\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.121028 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts\") pod \"c0e078da-a5e4-49c9-8724-560b2354f424\" (UID: \"c0e078da-a5e4-49c9-8724-560b2354f424\") " Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.121798 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0e078da-a5e4-49c9-8724-560b2354f424" (UID: "c0e078da-a5e4-49c9-8724-560b2354f424"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.122145 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e078da-a5e4-49c9-8724-560b2354f424-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.139116 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh" (OuterVolumeSpecName: "kube-api-access-fdmwh") pod "c0e078da-a5e4-49c9-8724-560b2354f424" (UID: "c0e078da-a5e4-49c9-8724-560b2354f424"). InnerVolumeSpecName "kube-api-access-fdmwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.223641 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts\") pod \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.223787 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwbm6\" (UniqueName: \"kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6\") pod \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\" (UID: \"1d12dc70-c2d4-42e8-b021-a500e3f3dabe\") " Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.224121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d12dc70-c2d4-42e8-b021-a500e3f3dabe" (UID: "1d12dc70-c2d4-42e8-b021-a500e3f3dabe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.224267 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdmwh\" (UniqueName: \"kubernetes.io/projected/c0e078da-a5e4-49c9-8724-560b2354f424-kube-api-access-fdmwh\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.224288 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.227190 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6" (OuterVolumeSpecName: "kube-api-access-qwbm6") pod "1d12dc70-c2d4-42e8-b021-a500e3f3dabe" (UID: "1d12dc70-c2d4-42e8-b021-a500e3f3dabe"). InnerVolumeSpecName "kube-api-access-qwbm6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.326087 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwbm6\" (UniqueName: \"kubernetes.io/projected/1d12dc70-c2d4-42e8-b021-a500e3f3dabe-kube-api-access-qwbm6\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.621986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jtkvh" event={"ID":"c0e078da-a5e4-49c9-8724-560b2354f424","Type":"ContainerDied","Data":"a7fe45e5c95710ad7be633fb19b7ed56babc9053c9507a10c135305b9abd1ffd"} Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.622038 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7fe45e5c95710ad7be633fb19b7ed56babc9053c9507a10c135305b9abd1ffd" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.622083 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-jtkvh" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.624433 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e44c-account-create-update-rg8tm" event={"ID":"1d12dc70-c2d4-42e8-b021-a500e3f3dabe","Type":"ContainerDied","Data":"4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7"} Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.624461 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a46853910e565738cbc4a3fafebf6bf959c75a96b23311f405427970ef088b7" Jan 21 16:04:03 crc kubenswrapper[4834]: I0121 16:04:03.624521 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e44c-account-create-update-rg8tm" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.712566 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:04:05 crc kubenswrapper[4834]: E0121 16:04:05.713428 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e078da-a5e4-49c9-8724-560b2354f424" containerName="mariadb-database-create" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.713446 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e078da-a5e4-49c9-8724-560b2354f424" containerName="mariadb-database-create" Jan 21 16:04:05 crc kubenswrapper[4834]: E0121 16:04:05.713470 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d12dc70-c2d4-42e8-b021-a500e3f3dabe" containerName="mariadb-account-create-update" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.713477 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d12dc70-c2d4-42e8-b021-a500e3f3dabe" containerName="mariadb-account-create-update" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.713681 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e078da-a5e4-49c9-8724-560b2354f424" containerName="mariadb-database-create" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.713703 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d12dc70-c2d4-42e8-b021-a500e3f3dabe" containerName="mariadb-account-create-update" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.714906 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.722561 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.748017 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-spb6s"] Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.749226 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.751798 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.752197 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.756073 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-985f9" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.777724 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-spb6s"] Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869216 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869276 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjfq\" (UniqueName: \"kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869294 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869337 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghw8q\" (UniqueName: \"kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869380 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle\") pod 
\"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869401 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869429 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869465 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.869510 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972011 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972102 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972236 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972294 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972356 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" 
Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972416 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972467 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972508 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjfq\" (UniqueName: \"kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972544 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.972592 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghw8q\" (UniqueName: \"kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.973509 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.973550 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.973681 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.973948 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.974334 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.983630 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.984359 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.986647 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:05 crc kubenswrapper[4834]: I0121 16:04:05.994619 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghw8q\" (UniqueName: \"kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q\") pod \"placement-db-sync-spb6s\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.020165 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjfq\" (UniqueName: \"kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq\") pod \"dnsmasq-dns-796897d589-vfltd\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.043650 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.080788 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.429175 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-spb6s"] Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.508299 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:04:06 crc kubenswrapper[4834]: W0121 16:04:06.514411 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod733dfef0_c07f_4030_858b_c5a1813ccaaf.slice/crio-b0ea5a60fe3ed49f59c8763c43d00527516a2a620abe96c58ed70e601de61e83 WatchSource:0}: Error finding container b0ea5a60fe3ed49f59c8763c43d00527516a2a620abe96c58ed70e601de61e83: Status 404 returned error can't find the container with id b0ea5a60fe3ed49f59c8763c43d00527516a2a620abe96c58ed70e601de61e83 Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.649371 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spb6s" event={"ID":"38472bc9-79fa-40c4-9319-e4999e158433","Type":"ContainerStarted","Data":"2cb8b628d3c0c415157e7f679a10b1cddc63517a06df551807c234901c5edfd0"} Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.649411 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spb6s" event={"ID":"38472bc9-79fa-40c4-9319-e4999e158433","Type":"ContainerStarted","Data":"d30543f78813f33c316d1ec7075e2b84710bcb69e6bfbeaf447c5b1161cada0c"} Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.653390 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796897d589-vfltd" event={"ID":"733dfef0-c07f-4030-858b-c5a1813ccaaf","Type":"ContainerStarted","Data":"b0ea5a60fe3ed49f59c8763c43d00527516a2a620abe96c58ed70e601de61e83"} Jan 21 16:04:06 crc kubenswrapper[4834]: I0121 16:04:06.673034 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-spb6s" podStartSLOduration=1.673012935 podStartE2EDuration="1.673012935s" podCreationTimestamp="2026-01-21 16:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:06.665712818 +0000 UTC m=+5592.640061883" watchObservedRunningTime="2026-01-21 16:04:06.673012935 +0000 UTC m=+5592.647361980" Jan 21 16:04:07 crc kubenswrapper[4834]: I0121 16:04:07.666275 4834 generic.go:334] "Generic (PLEG): container finished" podID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerID="fbff18b0cbd53887636292fa316134997bf1e9fe0d54cb2c80c0b75d9f0da8ea" exitCode=0 Jan 21 16:04:07 crc kubenswrapper[4834]: I0121 16:04:07.666337 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796897d589-vfltd" event={"ID":"733dfef0-c07f-4030-858b-c5a1813ccaaf","Type":"ContainerDied","Data":"fbff18b0cbd53887636292fa316134997bf1e9fe0d54cb2c80c0b75d9f0da8ea"} Jan 21 16:04:07 crc kubenswrapper[4834]: I0121 16:04:07.842948 4834 scope.go:117] "RemoveContainer" containerID="cc95a067551439a83e1cb08dbb0f94e548fee9c5265fd1cee1a72ba2e6ad87b9" Jan 21 16:04:08 crc kubenswrapper[4834]: I0121 16:04:08.676512 4834 generic.go:334] "Generic (PLEG): container finished" podID="38472bc9-79fa-40c4-9319-e4999e158433" containerID="2cb8b628d3c0c415157e7f679a10b1cddc63517a06df551807c234901c5edfd0" exitCode=0 Jan 21 16:04:08 crc kubenswrapper[4834]: I0121 16:04:08.676859 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-spb6s" event={"ID":"38472bc9-79fa-40c4-9319-e4999e158433","Type":"ContainerDied","Data":"2cb8b628d3c0c415157e7f679a10b1cddc63517a06df551807c234901c5edfd0"} Jan 21 16:04:08 crc kubenswrapper[4834]: I0121 16:04:08.680565 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796897d589-vfltd" event={"ID":"733dfef0-c07f-4030-858b-c5a1813ccaaf","Type":"ContainerStarted","Data":"2590c4c18430891c6f3b8350e3447ac0066d30e7855cb0feb033b6433e6d2dfa"} Jan 21 16:04:08 crc kubenswrapper[4834]: I0121 16:04:08.681384 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.069304 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.093165 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-796897d589-vfltd" podStartSLOduration=5.093145166 podStartE2EDuration="5.093145166s" podCreationTimestamp="2026-01-21 16:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:08.721058428 +0000 UTC m=+5594.695407483" watchObservedRunningTime="2026-01-21 16:04:10.093145166 +0000 UTC m=+5596.067494211" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.150493 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle\") pod \"38472bc9-79fa-40c4-9319-e4999e158433\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.150547 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghw8q\" (UniqueName: \"kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q\") pod \"38472bc9-79fa-40c4-9319-e4999e158433\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.150633 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs\") pod \"38472bc9-79fa-40c4-9319-e4999e158433\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.150669 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts\") pod \"38472bc9-79fa-40c4-9319-e4999e158433\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.151336 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs" (OuterVolumeSpecName: "logs") pod "38472bc9-79fa-40c4-9319-e4999e158433" (UID: "38472bc9-79fa-40c4-9319-e4999e158433"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.150740 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data\") pod \"38472bc9-79fa-40c4-9319-e4999e158433\" (UID: \"38472bc9-79fa-40c4-9319-e4999e158433\") " Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.152209 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38472bc9-79fa-40c4-9319-e4999e158433-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.157148 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts" (OuterVolumeSpecName: "scripts") pod "38472bc9-79fa-40c4-9319-e4999e158433" (UID: "38472bc9-79fa-40c4-9319-e4999e158433"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.157313 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q" (OuterVolumeSpecName: "kube-api-access-ghw8q") pod "38472bc9-79fa-40c4-9319-e4999e158433" (UID: "38472bc9-79fa-40c4-9319-e4999e158433"). InnerVolumeSpecName "kube-api-access-ghw8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.175997 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38472bc9-79fa-40c4-9319-e4999e158433" (UID: "38472bc9-79fa-40c4-9319-e4999e158433"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.178423 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data" (OuterVolumeSpecName: "config-data") pod "38472bc9-79fa-40c4-9319-e4999e158433" (UID: "38472bc9-79fa-40c4-9319-e4999e158433"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.254425 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.254487 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.254506 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghw8q\" (UniqueName: \"kubernetes.io/projected/38472bc9-79fa-40c4-9319-e4999e158433-kube-api-access-ghw8q\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.254520 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38472bc9-79fa-40c4-9319-e4999e158433-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.713995 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-spb6s" event={"ID":"38472bc9-79fa-40c4-9319-e4999e158433","Type":"ContainerDied","Data":"d30543f78813f33c316d1ec7075e2b84710bcb69e6bfbeaf447c5b1161cada0c"} Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.714090 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30543f78813f33c316d1ec7075e2b84710bcb69e6bfbeaf447c5b1161cada0c" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.714125 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-spb6s" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.784119 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-846f6d55db-7dwsr"] Jan 21 16:04:10 crc kubenswrapper[4834]: E0121 16:04:10.784528 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38472bc9-79fa-40c4-9319-e4999e158433" containerName="placement-db-sync" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.784546 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="38472bc9-79fa-40c4-9319-e4999e158433" containerName="placement-db-sync" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.784757 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="38472bc9-79fa-40c4-9319-e4999e158433" containerName="placement-db-sync" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.785777 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.792414 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.792804 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-985f9" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.792954 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.810061 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846f6d55db-7dwsr"] Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.866791 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-scripts\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.866883 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-combined-ca-bundle\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.866940 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca65b952-f3c1-4b89-b0ba-63085e701161-logs\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.866985 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-config-data\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.867008 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgk7\" (UniqueName: \"kubernetes.io/projected/ca65b952-f3c1-4b89-b0ba-63085e701161-kube-api-access-xxgk7\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.968629 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca65b952-f3c1-4b89-b0ba-63085e701161-logs\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.968713 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-config-data\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.968742 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxgk7\" (UniqueName: \"kubernetes.io/projected/ca65b952-f3c1-4b89-b0ba-63085e701161-kube-api-access-xxgk7\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.968796 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-scripts\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.968845 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-combined-ca-bundle\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.970167 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca65b952-f3c1-4b89-b0ba-63085e701161-logs\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.973410 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-scripts\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.973631 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-combined-ca-bundle\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.974234 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca65b952-f3c1-4b89-b0ba-63085e701161-config-data\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:10 crc kubenswrapper[4834]: I0121 16:04:10.985039 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxgk7\" (UniqueName: \"kubernetes.io/projected/ca65b952-f3c1-4b89-b0ba-63085e701161-kube-api-access-xxgk7\") pod \"placement-846f6d55db-7dwsr\" (UID: \"ca65b952-f3c1-4b89-b0ba-63085e701161\") " pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:11 crc kubenswrapper[4834]: I0121 16:04:11.150511 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:11 crc kubenswrapper[4834]: W0121 16:04:11.642178 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca65b952_f3c1_4b89_b0ba_63085e701161.slice/crio-4b6059be9b1df40de863b991ec5805b6f5f512ee5bbd1e8750a0947d8f191021 WatchSource:0}: Error finding container 4b6059be9b1df40de863b991ec5805b6f5f512ee5bbd1e8750a0947d8f191021: Status 404 returned error can't find the container with id 4b6059be9b1df40de863b991ec5805b6f5f512ee5bbd1e8750a0947d8f191021 Jan 21 16:04:11 crc kubenswrapper[4834]: I0121 16:04:11.642837 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846f6d55db-7dwsr"] Jan 21 16:04:11 crc kubenswrapper[4834]: I0121 16:04:11.722885 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846f6d55db-7dwsr" event={"ID":"ca65b952-f3c1-4b89-b0ba-63085e701161","Type":"ContainerStarted","Data":"4b6059be9b1df40de863b991ec5805b6f5f512ee5bbd1e8750a0947d8f191021"} Jan 21 16:04:12 crc kubenswrapper[4834]: I0121 16:04:12.741524 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846f6d55db-7dwsr" event={"ID":"ca65b952-f3c1-4b89-b0ba-63085e701161","Type":"ContainerStarted","Data":"d47aaf9fb6a5be86bc7afbaafd62a20b4359b2fd74f03297c27eff64c1001f79"} Jan 21 16:04:12 crc kubenswrapper[4834]: I0121 16:04:12.742044 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:12 crc kubenswrapper[4834]: I0121 16:04:12.742090 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846f6d55db-7dwsr" event={"ID":"ca65b952-f3c1-4b89-b0ba-63085e701161","Type":"ContainerStarted","Data":"8b93339f4c9cc3483ee4c347844bb001ff72072381f2efa730ad8b65e911cde3"} Jan 21 16:04:12 crc kubenswrapper[4834]: I0121 16:04:12.742122 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:12 crc kubenswrapper[4834]: I0121 16:04:12.769442 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-846f6d55db-7dwsr" podStartSLOduration=2.769409362 podStartE2EDuration="2.769409362s" podCreationTimestamp="2026-01-21 16:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:04:12.76936395 +0000 UTC m=+5598.743713005" watchObservedRunningTime="2026-01-21 16:04:12.769409362 +0000 UTC m=+5598.743758437" Jan 21 16:04:16 crc kubenswrapper[4834]: I0121 16:04:16.046235 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:04:16 crc kubenswrapper[4834]: I0121 16:04:16.116047 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:04:16 crc kubenswrapper[4834]: I0121 16:04:16.116369 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="dnsmasq-dns" containerID="cri-o://26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb" gracePeriod=10 Jan 21 16:04:16 crc kubenswrapper[4834]: I0121 16:04:16.938968 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" podUID="c3267acd-477e-4723-af63-957f157352df" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.39:5353: connect: connection refused" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.197408 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.237744 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb\") pod \"c3267acd-477e-4723-af63-957f157352df\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.237866 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc\") pod \"c3267acd-477e-4723-af63-957f157352df\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.237956 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb\") pod \"c3267acd-477e-4723-af63-957f157352df\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.238009 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5gv\" (UniqueName: \"kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv\") pod \"c3267acd-477e-4723-af63-957f157352df\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.238077 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config\") pod \"c3267acd-477e-4723-af63-957f157352df\" (UID: \"c3267acd-477e-4723-af63-957f157352df\") " Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.248467 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv" (OuterVolumeSpecName: "kube-api-access-mg5gv") pod "c3267acd-477e-4723-af63-957f157352df" (UID: "c3267acd-477e-4723-af63-957f157352df"). InnerVolumeSpecName "kube-api-access-mg5gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.295538 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3267acd-477e-4723-af63-957f157352df" (UID: "c3267acd-477e-4723-af63-957f157352df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.300826 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3267acd-477e-4723-af63-957f157352df" (UID: "c3267acd-477e-4723-af63-957f157352df"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.301416 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3267acd-477e-4723-af63-957f157352df" (UID: "c3267acd-477e-4723-af63-957f157352df"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.313025 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config" (OuterVolumeSpecName: "config") pod "c3267acd-477e-4723-af63-957f157352df" (UID: "c3267acd-477e-4723-af63-957f157352df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.340347 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.340387 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5gv\" (UniqueName: \"kubernetes.io/projected/c3267acd-477e-4723-af63-957f157352df-kube-api-access-mg5gv\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.340401 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.340410 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.340421 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3267acd-477e-4723-af63-957f157352df-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.821383 4834 generic.go:334] "Generic (PLEG): container finished" podID="c3267acd-477e-4723-af63-957f157352df" containerID="26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb" exitCode=0 Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.821420 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.821439 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" event={"ID":"c3267acd-477e-4723-af63-957f157352df","Type":"ContainerDied","Data":"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb"} Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.821617 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74648d7f5-f6c4t" event={"ID":"c3267acd-477e-4723-af63-957f157352df","Type":"ContainerDied","Data":"5d8a2fcb7b3f1f082ae5df61aa150565285b1723e58fd85de57285bdb11b683d"} Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.821642 4834 scope.go:117] "RemoveContainer" containerID="26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.915061 4834 scope.go:117] "RemoveContainer" containerID="43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.919172 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.929877 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74648d7f5-f6c4t"] Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.945339 4834 scope.go:117] "RemoveContainer" containerID="26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb" Jan 21 16:04:17 crc kubenswrapper[4834]: E0121 16:04:17.945735 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb\": container with ID starting with 26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb not found: ID does not exist" containerID="26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.945772 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb"} err="failed to get container status \"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb\": rpc error: code = NotFound desc = could not find container \"26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb\": container with ID starting with 26ed7ede092f2e677e95ea3029e15046ca1a28b94f221879e1e660742561b1eb not found: ID does not exist" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.945795 4834 scope.go:117] "RemoveContainer" containerID="43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15" Jan 21 16:04:17 crc kubenswrapper[4834]: E0121 16:04:17.946410 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15\": container with ID starting with 43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15 not found: ID does not exist" containerID="43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15" Jan 21 16:04:17 crc kubenswrapper[4834]: I0121 16:04:17.946495 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15"} err="failed to get container status 
\"43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15\": rpc error: code = NotFound desc = could not find container \"43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15\": container with ID starting with 43e1d2011a1b3e3cffa7e86a17b3e8a17d711477cfc113905f38d5ac94ccbd15 not found: ID does not exist" Jan 21 16:04:18 crc kubenswrapper[4834]: I0121 16:04:18.336457 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3267acd-477e-4723-af63-957f157352df" path="/var/lib/kubelet/pods/c3267acd-477e-4723-af63-957f157352df/volumes" Jan 21 16:04:42 crc kubenswrapper[4834]: I0121 16:04:42.297284 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:42 crc kubenswrapper[4834]: I0121 16:04:42.351719 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846f6d55db-7dwsr" Jan 21 16:04:47 crc kubenswrapper[4834]: I0121 16:04:47.114566 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:47 crc kubenswrapper[4834]: I0121 16:04:47.116379 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.333529 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fsw9d"] Jan 21 16:05:03 crc kubenswrapper[4834]: E0121 16:05:03.342151 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="dnsmasq-dns" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.342185 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="dnsmasq-dns" Jan 21 16:05:03 crc kubenswrapper[4834]: E0121 16:05:03.342215 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="init" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.342225 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="init" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.342453 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3267acd-477e-4723-af63-957f157352df" containerName="dnsmasq-dns" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.343362 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.382041 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fsw9d"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.425572 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wkcxw"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.426638 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.446757 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wkcxw"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.458393 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8580-account-create-update-qk6lj"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.460840 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.462898 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.470008 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8580-account-create-update-qk6lj"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.500030 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ssk\" (UniqueName: \"kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.500119 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.500311 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.500529 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws98r\" (UniqueName: \"kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.521401 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7cmlv"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.522443 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.534977 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7cmlv"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603131 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5ssk\" (UniqueName: \"kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603218 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603285 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603332 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5l6\" (UniqueName: \"kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603365 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603392 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk6x\" (UniqueName: \"kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603434 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.603483 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws98r\" (UniqueName: \"kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: 
I0121 16:05:03.605257 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.605284 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.633986 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-29c8-account-create-update-pmcx2"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.635335 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.642098 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.646032 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-29c8-account-create-update-pmcx2"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.647826 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws98r\" (UniqueName: \"kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r\") pod \"nova-cell0-db-create-wkcxw\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.657834 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5ssk\" (UniqueName: \"kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk\") pod \"nova-api-db-create-fsw9d\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.675536 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711182 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhhs\" (UniqueName: \"kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711292 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711349 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711384 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5l6\" (UniqueName: \"kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711411 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.711434 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqk6x\" (UniqueName: \"kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.712833 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.713107 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.733995 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk6x\" 
(UniqueName: \"kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x\") pod \"nova-cell1-db-create-7cmlv\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.734755 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5l6\" (UniqueName: \"kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6\") pod \"nova-api-8580-account-create-update-qk6lj\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.746852 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.779502 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.813335 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.813471 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhhs\" (UniqueName: \"kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.815411 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.825319 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0f3c-account-create-update-zj7d8"] Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.827847 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.830614 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.837392 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhhs\" (UniqueName: \"kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs\") pod \"nova-cell0-29c8-account-create-update-pmcx2\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.882554 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.896855 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.923250 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.923603 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:03 crc kubenswrapper[4834]: I0121 16:05:03.976887 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0f3c-account-create-update-zj7d8"] Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.027847 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.028006 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.029500 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.053421 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz\") pod \"nova-cell1-0f3c-account-create-update-zj7d8\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.208227 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.431228 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fsw9d"] Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.442710 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8580-account-create-update-qk6lj"] Jan 21 16:05:04 crc kubenswrapper[4834]: W0121 16:05:04.444349 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddac0977e_6a7b_4218_bb8d_409dd7c4732e.slice/crio-5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d WatchSource:0}: Error finding container 5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d: Status 404 returned error can't find the container with id 5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d Jan 21 16:05:04 crc kubenswrapper[4834]: W0121 16:05:04.598466 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc8138c_cb56_4072_a2cd_709f7feee5af.slice/crio-cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2 WatchSource:0}: Error finding container cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2: Status 404 returned error can't find the container with id cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2 Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.607057 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wkcxw"] Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.676909 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7cmlv"] Jan 21 16:05:04 crc kubenswrapper[4834]: W0121 16:05:04.686223 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e3f2378_7abe_45a0_9099_b1f299c2df01.slice/crio-f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e WatchSource:0}: Error finding container f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e: Status 404 returned error can't find the container with id f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.693767 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-29c8-account-create-update-pmcx2"] Jan 21 16:05:04 crc kubenswrapper[4834]: W0121 16:05:04.703233 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod139d4606_a92c_4c83_9291_9aa71c0715e1.slice/crio-3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569 WatchSource:0}: Error finding container 3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569: Status 404 returned error can't find the container with id 3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569 Jan 21 16:05:04 crc kubenswrapper[4834]: I0121 16:05:04.808027 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0f3c-account-create-update-zj7d8"] Jan 21 16:05:04 crc kubenswrapper[4834]: W0121 16:05:04.825026 4834 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7a8005_894f_45b0_99bd_50bb7be1be48.slice/crio-40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441 WatchSource:0}: Error finding container 40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441: Status 404 returned error can't find the container with id 40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441 Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.291389 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" event={"ID":"139d4606-a92c-4c83-9291-9aa71c0715e1","Type":"ContainerStarted","Data":"813d4685cd37b2c6d05baf31f97e3b030923410e8491a86df47d3bc90890a1a9"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.291472 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" event={"ID":"139d4606-a92c-4c83-9291-9aa71c0715e1","Type":"ContainerStarted","Data":"3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.294613 4834 generic.go:334] "Generic (PLEG): container finished" podID="18b6fba8-c961-4b52-b97f-8189df4a5339" containerID="8da0c035eba0f37fbf36b2f9b877617e33bbf2afa4bd84c302ffdc13b7e5c96c" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.294676 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsw9d" event={"ID":"18b6fba8-c961-4b52-b97f-8189df4a5339","Type":"ContainerDied","Data":"8da0c035eba0f37fbf36b2f9b877617e33bbf2afa4bd84c302ffdc13b7e5c96c"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.294742 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsw9d" event={"ID":"18b6fba8-c961-4b52-b97f-8189df4a5339","Type":"ContainerStarted","Data":"2203472075dd0875c44f7fde5791fd1781e320c4fa07879045e30f8d167b4e15"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.297159 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wkcxw" event={"ID":"3fc8138c-cb56-4072-a2cd-709f7feee5af","Type":"ContainerStarted","Data":"df8dddb87bfaa7e48780b1ed47943d85bc9046aa18825f30c8e3d4e55174f84a"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.297205 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wkcxw" event={"ID":"3fc8138c-cb56-4072-a2cd-709f7feee5af","Type":"ContainerStarted","Data":"cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.299157 4834 generic.go:334] "Generic (PLEG): container finished" podID="dac0977e-6a7b-4218-bb8d-409dd7c4732e" containerID="dccabb73344b6d4b4832eee08f46cd0974fb2571dd940533e757d10af8253300" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.299209 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8580-account-create-update-qk6lj" event={"ID":"dac0977e-6a7b-4218-bb8d-409dd7c4732e","Type":"ContainerDied","Data":"dccabb73344b6d4b4832eee08f46cd0974fb2571dd940533e757d10af8253300"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.299240 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8580-account-create-update-qk6lj" event={"ID":"dac0977e-6a7b-4218-bb8d-409dd7c4732e","Type":"ContainerStarted","Data":"5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.301260 4834 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cmlv" event={"ID":"3e3f2378-7abe-45a0-9099-b1f299c2df01","Type":"ContainerStarted","Data":"c284d50a2b5f867a9fef785cb6f1595b0dae76ce230609fe5e58bbc66d872e79"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.301291 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cmlv" event={"ID":"3e3f2378-7abe-45a0-9099-b1f299c2df01","Type":"ContainerStarted","Data":"f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.303901 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" event={"ID":"ab7a8005-894f-45b0-99bd-50bb7be1be48","Type":"ContainerStarted","Data":"d1da306fecbc7856a5db3cbf8d2b86808f604e286b30d768798a78674018c731"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.303957 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" event={"ID":"ab7a8005-894f-45b0-99bd-50bb7be1be48","Type":"ContainerStarted","Data":"40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441"} Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.317457 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" podStartSLOduration=2.3174315979999998 podStartE2EDuration="2.317431598s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:05.309398588 +0000 UTC m=+5651.283747643" watchObservedRunningTime="2026-01-21 16:05:05.317431598 +0000 UTC m=+5651.291780643" Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.342506 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-7cmlv" podStartSLOduration=2.34248429 podStartE2EDuration="2.34248429s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:05.337996229 +0000 UTC m=+5651.312345294" watchObservedRunningTime="2026-01-21 16:05:05.34248429 +0000 UTC m=+5651.316833335" Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.366553 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" podStartSLOduration=2.36653082 podStartE2EDuration="2.36653082s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:05.355652011 +0000 UTC m=+5651.330001056" watchObservedRunningTime="2026-01-21 16:05:05.36653082 +0000 UTC m=+5651.340879865" Jan 21 16:05:05 crc kubenswrapper[4834]: I0121 16:05:05.391432 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-wkcxw" podStartSLOduration=2.391399126 podStartE2EDuration="2.391399126s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:05.390025914 +0000 UTC m=+5651.364374969" watchObservedRunningTime="2026-01-21 16:05:05.391399126 +0000 UTC m=+5651.365748171" Jan 21 16:05:06 crc 
kubenswrapper[4834]: I0121 16:05:06.317565 4834 generic.go:334] "Generic (PLEG): container finished" podID="3fc8138c-cb56-4072-a2cd-709f7feee5af" containerID="df8dddb87bfaa7e48780b1ed47943d85bc9046aa18825f30c8e3d4e55174f84a" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.317683 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wkcxw" event={"ID":"3fc8138c-cb56-4072-a2cd-709f7feee5af","Type":"ContainerDied","Data":"df8dddb87bfaa7e48780b1ed47943d85bc9046aa18825f30c8e3d4e55174f84a"} Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.329894 4834 generic.go:334] "Generic (PLEG): container finished" podID="3e3f2378-7abe-45a0-9099-b1f299c2df01" containerID="c284d50a2b5f867a9fef785cb6f1595b0dae76ce230609fe5e58bbc66d872e79" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.336025 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cmlv" event={"ID":"3e3f2378-7abe-45a0-9099-b1f299c2df01","Type":"ContainerDied","Data":"c284d50a2b5f867a9fef785cb6f1595b0dae76ce230609fe5e58bbc66d872e79"} Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.343665 4834 generic.go:334] "Generic (PLEG): container finished" podID="ab7a8005-894f-45b0-99bd-50bb7be1be48" containerID="d1da306fecbc7856a5db3cbf8d2b86808f604e286b30d768798a78674018c731" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.343810 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" event={"ID":"ab7a8005-894f-45b0-99bd-50bb7be1be48","Type":"ContainerDied","Data":"d1da306fecbc7856a5db3cbf8d2b86808f604e286b30d768798a78674018c731"} Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.353753 4834 generic.go:334] "Generic (PLEG): container finished" podID="139d4606-a92c-4c83-9291-9aa71c0715e1" containerID="813d4685cd37b2c6d05baf31f97e3b030923410e8491a86df47d3bc90890a1a9" exitCode=0 Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.357018 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" event={"ID":"139d4606-a92c-4c83-9291-9aa71c0715e1","Type":"ContainerDied","Data":"813d4685cd37b2c6d05baf31f97e3b030923410e8491a86df47d3bc90890a1a9"} Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.857168 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.863891 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.920897 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts\") pod \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.920966 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5ssk\" (UniqueName: \"kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk\") pod \"18b6fba8-c961-4b52-b97f-8189df4a5339\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.921060 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts\") pod \"18b6fba8-c961-4b52-b97f-8189df4a5339\" (UID: \"18b6fba8-c961-4b52-b97f-8189df4a5339\") " Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.921225 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq5l6\" (UniqueName: \"kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6\") pod \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\" (UID: \"dac0977e-6a7b-4218-bb8d-409dd7c4732e\") " Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.927237 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6" (OuterVolumeSpecName: "kube-api-access-bq5l6") pod "dac0977e-6a7b-4218-bb8d-409dd7c4732e" (UID: "dac0977e-6a7b-4218-bb8d-409dd7c4732e"). InnerVolumeSpecName "kube-api-access-bq5l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.927636 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac0977e-6a7b-4218-bb8d-409dd7c4732e" (UID: "dac0977e-6a7b-4218-bb8d-409dd7c4732e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.930273 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk" (OuterVolumeSpecName: "kube-api-access-m5ssk") pod "18b6fba8-c961-4b52-b97f-8189df4a5339" (UID: "18b6fba8-c961-4b52-b97f-8189df4a5339"). InnerVolumeSpecName "kube-api-access-m5ssk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:06 crc kubenswrapper[4834]: I0121 16:05:06.930511 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18b6fba8-c961-4b52-b97f-8189df4a5339" (UID: "18b6fba8-c961-4b52-b97f-8189df4a5339"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.023658 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b6fba8-c961-4b52-b97f-8189df4a5339-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.023718 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq5l6\" (UniqueName: \"kubernetes.io/projected/dac0977e-6a7b-4218-bb8d-409dd7c4732e-kube-api-access-bq5l6\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.023732 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac0977e-6a7b-4218-bb8d-409dd7c4732e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.023740 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5ssk\" (UniqueName: \"kubernetes.io/projected/18b6fba8-c961-4b52-b97f-8189df4a5339-kube-api-access-m5ssk\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.363658 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8580-account-create-update-qk6lj" event={"ID":"dac0977e-6a7b-4218-bb8d-409dd7c4732e","Type":"ContainerDied","Data":"5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d"} Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.363708 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5603d305bdfa402d986781feb7a506c5a047c1e68cfb9bb95cd4d7999243fd9d" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.363820 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8580-account-create-update-qk6lj" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.367227 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsw9d" event={"ID":"18b6fba8-c961-4b52-b97f-8189df4a5339","Type":"ContainerDied","Data":"2203472075dd0875c44f7fde5791fd1781e320c4fa07879045e30f8d167b4e15"} Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.367292 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2203472075dd0875c44f7fde5791fd1781e320c4fa07879045e30f8d167b4e15" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.367426 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fsw9d" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.698593 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.837209 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws98r\" (UniqueName: \"kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r\") pod \"3fc8138c-cb56-4072-a2cd-709f7feee5af\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.837260 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts\") pod \"3fc8138c-cb56-4072-a2cd-709f7feee5af\" (UID: \"3fc8138c-cb56-4072-a2cd-709f7feee5af\") " Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.838354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fc8138c-cb56-4072-a2cd-709f7feee5af" (UID: "3fc8138c-cb56-4072-a2cd-709f7feee5af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.845060 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r" (OuterVolumeSpecName: "kube-api-access-ws98r") pod "3fc8138c-cb56-4072-a2cd-709f7feee5af" (UID: "3fc8138c-cb56-4072-a2cd-709f7feee5af"). InnerVolumeSpecName "kube-api-access-ws98r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.924457 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.933004 4834 scope.go:117] "RemoveContainer" containerID="09ceb7aa460369583e82475fe67466439c347b7b044b4272cf9354a778e1f523" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.933616 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.939806 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws98r\" (UniqueName: \"kubernetes.io/projected/3fc8138c-cb56-4072-a2cd-709f7feee5af-kube-api-access-ws98r\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.939832 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc8138c-cb56-4072-a2cd-709f7feee5af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.943545 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:07 crc kubenswrapper[4834]: I0121 16:05:07.998099 4834 scope.go:117] "RemoveContainer" containerID="e59a0ac4af384677ce5ae626ec634756b3e6ee6245ef7f15f3b39d3b00becf3e" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.040579 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz\") pod \"ab7a8005-894f-45b0-99bd-50bb7be1be48\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.041315 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts\") pod \"3e3f2378-7abe-45a0-9099-b1f299c2df01\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.041803 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rhhs\" (UniqueName: \"kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs\") pod \"139d4606-a92c-4c83-9291-9aa71c0715e1\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.042513 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts\") pod \"139d4606-a92c-4c83-9291-9aa71c0715e1\" (UID: \"139d4606-a92c-4c83-9291-9aa71c0715e1\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.042946 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "139d4606-a92c-4c83-9291-9aa71c0715e1" (UID: "139d4606-a92c-4c83-9291-9aa71c0715e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.043259 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts\") pod \"ab7a8005-894f-45b0-99bd-50bb7be1be48\" (UID: \"ab7a8005-894f-45b0-99bd-50bb7be1be48\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.043500 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqk6x\" (UniqueName: \"kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x\") pod \"3e3f2378-7abe-45a0-9099-b1f299c2df01\" (UID: \"3e3f2378-7abe-45a0-9099-b1f299c2df01\") " Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.043793 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e3f2378-7abe-45a0-9099-b1f299c2df01" (UID: "3e3f2378-7abe-45a0-9099-b1f299c2df01"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.044336 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz" (OuterVolumeSpecName: "kube-api-access-qjtfz") pod "ab7a8005-894f-45b0-99bd-50bb7be1be48" (UID: "ab7a8005-894f-45b0-99bd-50bb7be1be48"). InnerVolumeSpecName "kube-api-access-qjtfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.044465 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab7a8005-894f-45b0-99bd-50bb7be1be48" (UID: "ab7a8005-894f-45b0-99bd-50bb7be1be48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.045657 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/139d4606-a92c-4c83-9291-9aa71c0715e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.046260 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab7a8005-894f-45b0-99bd-50bb7be1be48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.046381 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/ab7a8005-894f-45b0-99bd-50bb7be1be48-kube-api-access-qjtfz\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.046805 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3f2378-7abe-45a0-9099-b1f299c2df01-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.045764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs" (OuterVolumeSpecName: "kube-api-access-7rhhs") pod "139d4606-a92c-4c83-9291-9aa71c0715e1" (UID: "139d4606-a92c-4c83-9291-9aa71c0715e1"). InnerVolumeSpecName "kube-api-access-7rhhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.049365 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x" (OuterVolumeSpecName: "kube-api-access-pqk6x") pod "3e3f2378-7abe-45a0-9099-b1f299c2df01" (UID: "3e3f2378-7abe-45a0-9099-b1f299c2df01"). InnerVolumeSpecName "kube-api-access-pqk6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.149375 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rhhs\" (UniqueName: \"kubernetes.io/projected/139d4606-a92c-4c83-9291-9aa71c0715e1-kube-api-access-7rhhs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.150009 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqk6x\" (UniqueName: \"kubernetes.io/projected/3e3f2378-7abe-45a0-9099-b1f299c2df01-kube-api-access-pqk6x\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.396018 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" event={"ID":"ab7a8005-894f-45b0-99bd-50bb7be1be48","Type":"ContainerDied","Data":"40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441"} Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.396055 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0f3c-account-create-update-zj7d8" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.396084 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bb1fd3e20fc6180b3a43f7999a0c5e33e40d4d1bd88ed95114706452f7c441" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.399679 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" event={"ID":"139d4606-a92c-4c83-9291-9aa71c0715e1","Type":"ContainerDied","Data":"3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569"} Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.399766 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3701fb17ed84e71aa759cfa7cdc77173c8629f8b910b0b0eec1824e76475a569" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.399893 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29c8-account-create-update-pmcx2" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.403760 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wkcxw" event={"ID":"3fc8138c-cb56-4072-a2cd-709f7feee5af","Type":"ContainerDied","Data":"cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2"} Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.403795 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc1c0a1604e7868ada452bd6a66da95a9102338fd9074fe566d95dfd467f9c2" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.403863 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wkcxw" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.408502 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7cmlv" event={"ID":"3e3f2378-7abe-45a0-9099-b1f299c2df01","Type":"ContainerDied","Data":"f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e"} Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.408575 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3e61a867bef84c4a36001c451c1f2377c6a667949d1a921bf1f7cbd6ec4308e" Jan 21 16:05:08 crc kubenswrapper[4834]: I0121 16:05:08.408584 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7cmlv" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.939415 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rpx4c"] Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940444 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b6fba8-c961-4b52-b97f-8189df4a5339" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940465 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b6fba8-c961-4b52-b97f-8189df4a5339" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940491 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc8138c-cb56-4072-a2cd-709f7feee5af" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940500 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc8138c-cb56-4072-a2cd-709f7feee5af" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940520 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a8005-894f-45b0-99bd-50bb7be1be48" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940528 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a8005-894f-45b0-99bd-50bb7be1be48" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940550 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac0977e-6a7b-4218-bb8d-409dd7c4732e" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940558 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac0977e-6a7b-4218-bb8d-409dd7c4732e" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940573 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139d4606-a92c-4c83-9291-9aa71c0715e1" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940582 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="139d4606-a92c-4c83-9291-9aa71c0715e1" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: E0121 16:05:13.940604 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3f2378-7abe-45a0-9099-b1f299c2df01" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940612 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3f2378-7abe-45a0-9099-b1f299c2df01" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940801 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="139d4606-a92c-4c83-9291-9aa71c0715e1" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940815 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b6fba8-c961-4b52-b97f-8189df4a5339" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940830 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3f2378-7abe-45a0-9099-b1f299c2df01" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940850 4834 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab7a8005-894f-45b0-99bd-50bb7be1be48" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940865 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc8138c-cb56-4072-a2cd-709f7feee5af" containerName="mariadb-database-create" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.940877 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac0977e-6a7b-4218-bb8d-409dd7c4732e" containerName="mariadb-account-create-update" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.941871 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.944576 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.944826 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.945276 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x6fzm" Jan 21 16:05:13 crc kubenswrapper[4834]: I0121 16:05:13.956506 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rpx4c"] Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.069737 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.069816 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.070333 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.070423 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkqs\" (UniqueName: \"kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.172492 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.172586 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.172680 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.172717 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkqs\" (UniqueName: \"kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.179383 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.179422 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.184912 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.199728 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkqs\" (UniqueName: \"kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs\") pod \"nova-cell0-conductor-db-sync-rpx4c\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.267337 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:14 crc kubenswrapper[4834]: I0121 16:05:14.801247 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rpx4c"] Jan 21 16:05:14 crc kubenswrapper[4834]: W0121 16:05:14.806251 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83ffe6c6_2260_4d10_9ffb_7a3bc7286f58.slice/crio-f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353 WatchSource:0}: Error finding container f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353: Status 404 returned error can't find the container with id f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353 Jan 21 16:05:15 crc kubenswrapper[4834]: I0121 16:05:15.503087 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" event={"ID":"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58","Type":"ContainerStarted","Data":"f45e243a4d91819ac1c5ada9ccc0853f8c14f5e498877660edeb9e9432136409"} Jan 21 16:05:15 crc kubenswrapper[4834]: I0121 16:05:15.503630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" event={"ID":"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58","Type":"ContainerStarted","Data":"f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353"} Jan 21 16:05:17 crc kubenswrapper[4834]: I0121 16:05:17.113944 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:05:17 crc kubenswrapper[4834]: I0121 16:05:17.114007 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:05:20 crc kubenswrapper[4834]: I0121 16:05:20.573721 4834 generic.go:334] "Generic (PLEG): container finished" podID="83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" containerID="f45e243a4d91819ac1c5ada9ccc0853f8c14f5e498877660edeb9e9432136409" exitCode=0 Jan 21 16:05:20 crc kubenswrapper[4834]: I0121 16:05:20.573808 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" event={"ID":"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58","Type":"ContainerDied","Data":"f45e243a4d91819ac1c5ada9ccc0853f8c14f5e498877660edeb9e9432136409"} Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.874720 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.936611 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data\") pod \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.936680 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqkqs\" (UniqueName: \"kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs\") pod \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.936828 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts\") pod \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.936867 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle\") pod \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\" (UID: \"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58\") " Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.942751 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs" (OuterVolumeSpecName: "kube-api-access-hqkqs") pod "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" (UID: "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58"). InnerVolumeSpecName "kube-api-access-hqkqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.943983 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts" (OuterVolumeSpecName: "scripts") pod "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" (UID: "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.963262 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" (UID: "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:21 crc kubenswrapper[4834]: I0121 16:05:21.967230 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data" (OuterVolumeSpecName: "config-data") pod "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" (UID: "83ffe6c6-2260-4d10-9ffb-7a3bc7286f58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.042732 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.042804 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqkqs\" (UniqueName: \"kubernetes.io/projected/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-kube-api-access-hqkqs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.042818 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.042829 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.595047 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" event={"ID":"83ffe6c6-2260-4d10-9ffb-7a3bc7286f58","Type":"ContainerDied","Data":"f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353"} Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.595092 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8f1fb1d5cbac581c828a7c8b09c6f54fd9c25d6353117daebe5d1a6e2e0e353" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.595114 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rpx4c" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.695052 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:05:22 crc kubenswrapper[4834]: E0121 16:05:22.696229 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" containerName="nova-cell0-conductor-db-sync" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.696264 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" containerName="nova-cell0-conductor-db-sync" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.696596 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" containerName="nova-cell0-conductor-db-sync" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.697541 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.699879 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.700407 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-x6fzm" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.708409 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.755445 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.755608 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z96k\" (UniqueName: \"kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.755798 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.857242 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z96k\" (UniqueName: \"kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.857377 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.857436 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.862470 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.862710 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:22 crc kubenswrapper[4834]: I0121 16:05:22.874543 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z96k\" (UniqueName: \"kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k\") pod \"nova-cell0-conductor-0\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:23 crc kubenswrapper[4834]: I0121 16:05:23.017362 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:23 crc kubenswrapper[4834]: I0121 16:05:23.502486 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:05:23 crc kubenswrapper[4834]: I0121 16:05:23.609677 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f448b4f-7224-48bd-8311-ba7ab9b018d7","Type":"ContainerStarted","Data":"88fac1af5322517e74e40ead2752f009cc3a7b66a7f3cdc043a5526d15814ce9"} Jan 21 16:05:24 crc kubenswrapper[4834]: I0121 16:05:24.620046 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f448b4f-7224-48bd-8311-ba7ab9b018d7","Type":"ContainerStarted","Data":"347b321015cdc5d6d9d23e7abae86039e9e9c0fb8be019ca447bbb4da6c01729"} Jan 21 16:05:24 crc kubenswrapper[4834]: I0121 16:05:24.621487 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.046221 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.071129 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.071101236 podStartE2EDuration="6.071101236s" podCreationTimestamp="2026-01-21 16:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:24.64723432 +0000 UTC m=+5670.621583395" watchObservedRunningTime="2026-01-21 16:05:28.071101236 +0000 UTC m=+5674.045450281" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.522893 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-f57q8"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.524147 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.532139 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.533077 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.534340 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f57q8"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.582605 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.582698 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.582775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.582818 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvcr\" (UniqueName: \"kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.681271 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.684528 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.684673 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.684715 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wvcr\" (UniqueName: \"kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc 
kubenswrapper[4834]: I0121 16:05:28.684780 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.685371 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.691333 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.696467 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.697192 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.698062 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.699717 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.707989 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.709156 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.715795 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.724765 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.729588 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvcr\" (UniqueName: \"kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr\") pod \"nova-cell0-cell-mapping-f57q8\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.775611 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.777112 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.781171 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786029 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786303 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvghs\" (UniqueName: \"kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786421 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786518 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786632 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786805 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.786945 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cgvl\" (UniqueName: \"kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.787042 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbm6t\" (UniqueName: \"kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.787127 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.787430 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.787541 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.834569 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.892344 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.895646 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.896616 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cgvl\" (UniqueName: \"kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.896746 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbm6t\" (UniqueName: \"kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.896835 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.896967 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.897055 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.897160 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.897419 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvghs\" (UniqueName: \"kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.909581 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.909785 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.903214 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.910026 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.914058 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.911276 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.909673 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.916083 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.916264 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.930337 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.938776 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.940412 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.956514 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvghs\" (UniqueName: \"kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.957802 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cgvl\" (UniqueName: \"kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.960601 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.961335 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " pod="openstack/nova-api-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.967869 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.968429 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.971329 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbm6t\" (UniqueName: \"kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t\") pod \"nova-metadata-0\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " pod="openstack/nova-metadata-0" Jan 21 16:05:28 crc kubenswrapper[4834]: I0121 16:05:28.989507 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.021066 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025134 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025375 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025496 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025570 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025681 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99j4g\" (UniqueName: \"kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.025876 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.026425 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqrp\" (UniqueName: \"kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp\") pod 
\"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.026573 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.103456 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.123746 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.128825 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.128887 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99j4g\" (UniqueName: \"kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.128996 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.129091 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfqrp\" (UniqueName: \"kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.129130 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.129186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.129220 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 
16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.129244 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.132151 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.133510 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.133851 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.135316 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.139251 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.139763 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.153176 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99j4g\" (UniqueName: \"kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.153262 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.157602 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfqrp\" (UniqueName: \"kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp\") pod \"dnsmasq-dns-66d88fc77c-5d4pt\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.386387 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.397409 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.459498 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f57q8"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.515411 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2pbrd"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.517553 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.522176 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.528689 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2pbrd"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.529615 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.639233 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkh6w\" (UniqueName: \"kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.639282 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.639548 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.639672 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.678956 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f57q8" event={"ID":"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea","Type":"ContainerStarted","Data":"61925c178d1d583f47126f757613eff4d71804ebea2f41d521969e8c201cbd7d"} Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.708400 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:05:29 crc kubenswrapper[4834]: W0121 16:05:29.709106 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod642eadf4_e07e_4a88_9f26_195713f66f79.slice/crio-3980055a1ad2da5e1fabea7d0a1b6bbb0307fbc2f3fdda0861b8a2ec2f7a41db WatchSource:0}: Error finding container 3980055a1ad2da5e1fabea7d0a1b6bbb0307fbc2f3fdda0861b8a2ec2f7a41db: Status 404 returned error can't find the container with id 3980055a1ad2da5e1fabea7d0a1b6bbb0307fbc2f3fdda0861b8a2ec2f7a41db Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.723497 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.743282 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mkh6w\" (UniqueName: \"kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.743348 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.743413 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.743459 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.748583 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.752074 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.752999 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.763468 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkh6w\" (UniqueName: \"kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w\") pod \"nova-cell1-conductor-db-sync-2pbrd\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.813448 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:29 crc kubenswrapper[4834]: I0121 16:05:29.851441 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.026540 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:30 crc kubenswrapper[4834]: W0121 16:05:30.039770 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b1de62e_e18b_4d18_a6d8_152df459d792.slice/crio-8f6ecabe7a7d3a7795ba2e4f0a9c6f3898250b4f79e87291e2f92a93745d641e WatchSource:0}: Error finding container 8f6ecabe7a7d3a7795ba2e4f0a9c6f3898250b4f79e87291e2f92a93745d641e: Status 404 returned error can't find the container with id 8f6ecabe7a7d3a7795ba2e4f0a9c6f3898250b4f79e87291e2f92a93745d641e Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.142465 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.357999 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2pbrd"] Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.691680 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f57q8" event={"ID":"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea","Type":"ContainerStarted","Data":"ff6ba458afbbf582c7ab171632a4b753819907f5438ec518b7b5087c186e07e6"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.694472 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerStarted","Data":"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.694527 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerStarted","Data":"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.694537 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerStarted","Data":"21453dd3b20b75b3df7e1f33c99b7fc9247ad319fcf03517418707f18b5a13c2"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.696096 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"642eadf4-e07e-4a88-9f26-195713f66f79","Type":"ContainerStarted","Data":"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.696127 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"642eadf4-e07e-4a88-9f26-195713f66f79","Type":"ContainerStarted","Data":"3980055a1ad2da5e1fabea7d0a1b6bbb0307fbc2f3fdda0861b8a2ec2f7a41db"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.699008 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerStarted","Data":"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.699044 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerStarted","Data":"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d"} Jan 21 16:05:30 crc 
kubenswrapper[4834]: I0121 16:05:30.699055 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerStarted","Data":"6ec294e440f1900b900dd1fe3a94a3bbbbc98831861fb775c286d2ad0433c6d6"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.701957 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" event={"ID":"b65a3e27-7598-4756-b6d1-7a8a9ad74bec","Type":"ContainerStarted","Data":"09f8eb4f4c0ff627c687af12d1c2d318b7b59ddbef2589e50877945780f52d27"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.701994 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" event={"ID":"b65a3e27-7598-4756-b6d1-7a8a9ad74bec","Type":"ContainerStarted","Data":"2764b405f4952d87c22f2fee0d5bbd1d36cac3808b1caa8c6066d5ad28481a4f"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.704815 4834 generic.go:334] "Generic (PLEG): container finished" podID="189e8ec9-d686-452f-8094-9f8c3433a638" containerID="e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb" exitCode=0 Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.705705 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" event={"ID":"189e8ec9-d686-452f-8094-9f8c3433a638","Type":"ContainerDied","Data":"e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.705737 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" event={"ID":"189e8ec9-d686-452f-8094-9f8c3433a638","Type":"ContainerStarted","Data":"54f2d51e50bd5a9245e224d5ee5cd907ab5080721de0b8f59199cd79e5ff56e0"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.710208 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b1de62e-e18b-4d18-a6d8-152df459d792","Type":"ContainerStarted","Data":"e4e0935605c55f9849173406b2e7383147c83882f2958a08728275c73910710c"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.710273 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b1de62e-e18b-4d18-a6d8-152df459d792","Type":"ContainerStarted","Data":"8f6ecabe7a7d3a7795ba2e4f0a9c6f3898250b4f79e87291e2f92a93745d641e"} Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.735731 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-f57q8" podStartSLOduration=2.735708367 podStartE2EDuration="2.735708367s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.724801417 +0000 UTC m=+5676.699150472" watchObservedRunningTime="2026-01-21 16:05:30.735708367 +0000 UTC m=+5676.710057402" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.762956 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" podStartSLOduration=1.7629123469999999 podStartE2EDuration="1.762912347s" podCreationTimestamp="2026-01-21 16:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.744473961 +0000 UTC m=+5676.718823006" watchObservedRunningTime="2026-01-21 16:05:30.762912347 +0000 UTC 
m=+5676.737261392" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.792101 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.792069496 podStartE2EDuration="2.792069496s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.761797652 +0000 UTC m=+5676.736146697" watchObservedRunningTime="2026-01-21 16:05:30.792069496 +0000 UTC m=+5676.766418551" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.822394 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.822369642 podStartE2EDuration="2.822369642s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.780213806 +0000 UTC m=+5676.754562861" watchObservedRunningTime="2026-01-21 16:05:30.822369642 +0000 UTC m=+5676.796718687" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.844646 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.844618336 podStartE2EDuration="2.844618336s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.803850044 +0000 UTC m=+5676.778199119" watchObservedRunningTime="2026-01-21 16:05:30.844618336 +0000 UTC m=+5676.818967381" Jan 21 16:05:30 crc kubenswrapper[4834]: I0121 16:05:30.869440 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.869415181 podStartE2EDuration="2.869415181s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:30.857614932 +0000 UTC m=+5676.831963987" watchObservedRunningTime="2026-01-21 16:05:30.869415181 +0000 UTC m=+5676.843764226" Jan 21 16:05:31 crc kubenswrapper[4834]: I0121 16:05:31.722411 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" event={"ID":"189e8ec9-d686-452f-8094-9f8c3433a638","Type":"ContainerStarted","Data":"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2"} Jan 21 16:05:31 crc kubenswrapper[4834]: I0121 16:05:31.754083 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" podStartSLOduration=3.754047367 podStartE2EDuration="3.754047367s" podCreationTimestamp="2026-01-21 16:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:31.748036389 +0000 UTC m=+5677.722385494" watchObservedRunningTime="2026-01-21 16:05:31.754047367 +0000 UTC m=+5677.728396452" Jan 21 16:05:32 crc kubenswrapper[4834]: I0121 16:05:32.734260 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:33 crc kubenswrapper[4834]: E0121 16:05:33.608999 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb65a3e27_7598_4756_b6d1_7a8a9ad74bec.slice/crio-conmon-09f8eb4f4c0ff627c687af12d1c2d318b7b59ddbef2589e50877945780f52d27.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:05:33 crc kubenswrapper[4834]: I0121 16:05:33.745834 4834 generic.go:334] "Generic (PLEG): container finished" podID="b65a3e27-7598-4756-b6d1-7a8a9ad74bec" containerID="09f8eb4f4c0ff627c687af12d1c2d318b7b59ddbef2589e50877945780f52d27" exitCode=0 Jan 21 16:05:33 crc kubenswrapper[4834]: I0121 16:05:33.745956 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" event={"ID":"b65a3e27-7598-4756-b6d1-7a8a9ad74bec","Type":"ContainerDied","Data":"09f8eb4f4c0ff627c687af12d1c2d318b7b59ddbef2589e50877945780f52d27"} Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.124523 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.140231 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.140316 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.387863 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.760551 4834 generic.go:334] "Generic (PLEG): container finished" podID="2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" containerID="ff6ba458afbbf582c7ab171632a4b753819907f5438ec518b7b5087c186e07e6" exitCode=0 Jan 21 16:05:34 crc kubenswrapper[4834]: I0121 16:05:34.760876 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f57q8" event={"ID":"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea","Type":"ContainerDied","Data":"ff6ba458afbbf582c7ab171632a4b753819907f5438ec518b7b5087c186e07e6"} Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.096177 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.162757 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts\") pod \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.162894 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle\") pod \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.163098 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data\") pod \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.163142 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkh6w\" (UniqueName: \"kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w\") pod \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\" (UID: \"b65a3e27-7598-4756-b6d1-7a8a9ad74bec\") " Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.175424 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts" (OuterVolumeSpecName: "scripts") pod "b65a3e27-7598-4756-b6d1-7a8a9ad74bec" (UID: "b65a3e27-7598-4756-b6d1-7a8a9ad74bec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.177175 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w" (OuterVolumeSpecName: "kube-api-access-mkh6w") pod "b65a3e27-7598-4756-b6d1-7a8a9ad74bec" (UID: "b65a3e27-7598-4756-b6d1-7a8a9ad74bec"). InnerVolumeSpecName "kube-api-access-mkh6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.196686 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data" (OuterVolumeSpecName: "config-data") pod "b65a3e27-7598-4756-b6d1-7a8a9ad74bec" (UID: "b65a3e27-7598-4756-b6d1-7a8a9ad74bec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.208640 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b65a3e27-7598-4756-b6d1-7a8a9ad74bec" (UID: "b65a3e27-7598-4756-b6d1-7a8a9ad74bec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.265331 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.265370 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.265380 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.265389 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkh6w\" (UniqueName: \"kubernetes.io/projected/b65a3e27-7598-4756-b6d1-7a8a9ad74bec-kube-api-access-mkh6w\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.771565 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" event={"ID":"b65a3e27-7598-4756-b6d1-7a8a9ad74bec","Type":"ContainerDied","Data":"2764b405f4952d87c22f2fee0d5bbd1d36cac3808b1caa8c6066d5ad28481a4f"} Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.771861 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2764b405f4952d87c22f2fee0d5bbd1d36cac3808b1caa8c6066d5ad28481a4f" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.771575 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2pbrd" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.890200 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:05:35 crc kubenswrapper[4834]: E0121 16:05:35.890742 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65a3e27-7598-4756-b6d1-7a8a9ad74bec" containerName="nova-cell1-conductor-db-sync" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.890764 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65a3e27-7598-4756-b6d1-7a8a9ad74bec" containerName="nova-cell1-conductor-db-sync" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.891035 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65a3e27-7598-4756-b6d1-7a8a9ad74bec" containerName="nova-cell1-conductor-db-sync" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.891760 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.908640 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.909963 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.977237 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.977316 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:35 crc kubenswrapper[4834]: I0121 16:05:35.977404 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm26x\" (UniqueName: \"kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.079202 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm26x\" (UniqueName: \"kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.079338 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.079364 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.084583 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.085318 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.120719 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm26x\" (UniqueName: \"kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x\") pod \"nova-cell1-conductor-0\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.210648 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.342344 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.385259 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts\") pod \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.385766 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wvcr\" (UniqueName: \"kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr\") pod \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.385887 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data\") pod \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.385999 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle\") pod \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\" (UID: \"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea\") " Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.390720 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts" (OuterVolumeSpecName: "scripts") pod "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" (UID: "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.390786 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr" (OuterVolumeSpecName: "kube-api-access-8wvcr") pod "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" (UID: "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea"). InnerVolumeSpecName "kube-api-access-8wvcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.417627 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" (UID: "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.433468 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data" (OuterVolumeSpecName: "config-data") pod "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" (UID: "2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.492846 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.493126 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wvcr\" (UniqueName: \"kubernetes.io/projected/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-kube-api-access-8wvcr\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.493162 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.493179 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.700892 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.803957 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f57q8" event={"ID":"2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea","Type":"ContainerDied","Data":"61925c178d1d583f47126f757613eff4d71804ebea2f41d521969e8c201cbd7d"} Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.804013 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61925c178d1d583f47126f757613eff4d71804ebea2f41d521969e8c201cbd7d" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.803884 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f57q8" Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.808409 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"72f890ac-74e2-4a65-abb3-1383e236e6a9","Type":"ContainerStarted","Data":"02b65e391a75253c1b6b79b7f54952efee5ee713165e05dc7e2b2af0221a769b"} Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.979604 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.980499 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-log" containerID="cri-o://5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" gracePeriod=30 Jan 21 16:05:36 crc kubenswrapper[4834]: I0121 16:05:36.981069 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-api" containerID="cri-o://fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" gracePeriod=30 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.004179 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.004528 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5b1de62e-e18b-4d18-a6d8-152df459d792" containerName="nova-scheduler-scheduler" containerID="cri-o://e4e0935605c55f9849173406b2e7383147c83882f2958a08728275c73910710c" gracePeriod=30 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.029943 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.030458 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-metadata" containerID="cri-o://ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" gracePeriod=30 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.030694 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-log" containerID="cri-o://2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" gracePeriod=30 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.773417 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.782556 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.816944 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs\") pod \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817013 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data\") pod \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817080 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvghs\" (UniqueName: \"kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs\") pod \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817107 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbm6t\" (UniqueName: \"kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t\") pod \"b157ad12-d8a6-4d56-b797-6515235c7d60\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817140 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle\") pod \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\" (UID: \"d533eb86-adfa-42c8-aabd-6dfd4a6207e9\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817178 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data\") pod \"b157ad12-d8a6-4d56-b797-6515235c7d60\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817223 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle\") pod \"b157ad12-d8a6-4d56-b797-6515235c7d60\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817252 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs\") pod \"b157ad12-d8a6-4d56-b797-6515235c7d60\" (UID: \"b157ad12-d8a6-4d56-b797-6515235c7d60\") " Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.817989 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs" (OuterVolumeSpecName: "logs") pod "b157ad12-d8a6-4d56-b797-6515235c7d60" (UID: "b157ad12-d8a6-4d56-b797-6515235c7d60"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.818220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs" (OuterVolumeSpecName: "logs") pod "d533eb86-adfa-42c8-aabd-6dfd4a6207e9" (UID: "d533eb86-adfa-42c8-aabd-6dfd4a6207e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.825920 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t" (OuterVolumeSpecName: "kube-api-access-zbm6t") pod "b157ad12-d8a6-4d56-b797-6515235c7d60" (UID: "b157ad12-d8a6-4d56-b797-6515235c7d60"). InnerVolumeSpecName "kube-api-access-zbm6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.826654 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs" (OuterVolumeSpecName: "kube-api-access-hvghs") pod "d533eb86-adfa-42c8-aabd-6dfd4a6207e9" (UID: "d533eb86-adfa-42c8-aabd-6dfd4a6207e9"). InnerVolumeSpecName "kube-api-access-hvghs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841514 4834 generic.go:334] "Generic (PLEG): container finished" podID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerID="fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" exitCode=0 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841551 4834 generic.go:334] "Generic (PLEG): container finished" podID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerID="5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" exitCode=143 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerDied","Data":"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841619 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerDied","Data":"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d533eb86-adfa-42c8-aabd-6dfd4a6207e9","Type":"ContainerDied","Data":"21453dd3b20b75b3df7e1f33c99b7fc9247ad319fcf03517418707f18b5a13c2"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841648 4834 scope.go:117] "RemoveContainer" containerID="fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.841811 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854637 4834 generic.go:334] "Generic (PLEG): container finished" podID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerID="ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" exitCode=0 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854680 4834 generic.go:334] "Generic (PLEG): container finished" podID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerID="2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" exitCode=143 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854770 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerDied","Data":"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854812 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerDied","Data":"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854829 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b157ad12-d8a6-4d56-b797-6515235c7d60","Type":"ContainerDied","Data":"6ec294e440f1900b900dd1fe3a94a3bbbbc98831861fb775c286d2ad0433c6d6"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.854917 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.861567 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b1de62e-e18b-4d18-a6d8-152df459d792" containerID="e4e0935605c55f9849173406b2e7383147c83882f2958a08728275c73910710c" exitCode=0 Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.861699 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b1de62e-e18b-4d18-a6d8-152df459d792","Type":"ContainerDied","Data":"e4e0935605c55f9849173406b2e7383147c83882f2958a08728275c73910710c"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.865362 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b157ad12-d8a6-4d56-b797-6515235c7d60" (UID: "b157ad12-d8a6-4d56-b797-6515235c7d60"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.867009 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"72f890ac-74e2-4a65-abb3-1383e236e6a9","Type":"ContainerStarted","Data":"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"} Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.867256 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.869244 4834 scope.go:117] "RemoveContainer" containerID="5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.887009 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.886993263 podStartE2EDuration="2.886993263s" podCreationTimestamp="2026-01-21 16:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:37.884405932 +0000 UTC m=+5683.858754977" watchObservedRunningTime="2026-01-21 16:05:37.886993263 +0000 UTC m=+5683.861342308" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.887698 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d533eb86-adfa-42c8-aabd-6dfd4a6207e9" (UID: "d533eb86-adfa-42c8-aabd-6dfd4a6207e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.892577 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data" (OuterVolumeSpecName: "config-data") pod "d533eb86-adfa-42c8-aabd-6dfd4a6207e9" (UID: "d533eb86-adfa-42c8-aabd-6dfd4a6207e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.901313 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data" (OuterVolumeSpecName: "config-data") pod "b157ad12-d8a6-4d56-b797-6515235c7d60" (UID: "b157ad12-d8a6-4d56-b797-6515235c7d60"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.907057 4834 scope.go:117] "RemoveContainer" containerID="fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" Jan 21 16:05:37 crc kubenswrapper[4834]: E0121 16:05:37.907496 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc\": container with ID starting with fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc not found: ID does not exist" containerID="fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.907524 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc"} err="failed to get container status \"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc\": rpc error: code = NotFound desc = could not find container \"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc\": container with ID starting with fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.907547 4834 scope.go:117] "RemoveContainer" containerID="5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" Jan 21 16:05:37 crc kubenswrapper[4834]: E0121 16:05:37.907912 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581\": container with ID starting with 5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581 not found: ID does not exist" containerID="5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.907991 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581"} err="failed to get container status \"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581\": rpc error: code = NotFound desc = could not find container \"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581\": container with ID starting with 5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581 not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.908031 4834 scope.go:117] "RemoveContainer" containerID="fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.908268 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc"} err="failed to get container status \"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc\": rpc error: code = NotFound desc = could not find container \"fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc\": container with ID starting with fe4f1bbbce9a88695c32db75c4461ed736b1df49bb4bcc6d4fbf7ec39a5539bc not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.908313 4834 scope.go:117] "RemoveContainer" containerID="5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.908583 4834 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581"} err="failed to get container status \"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581\": rpc error: code = NotFound desc = could not find container \"5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581\": container with ID starting with 5822b462a85e2fd1a6ee4b1b5d47cf7b80c6e1b735a945721d6dd0559399c581 not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.908602 4834 scope.go:117] "RemoveContainer" containerID="ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921682 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvghs\" (UniqueName: \"kubernetes.io/projected/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-kube-api-access-hvghs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921754 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbm6t\" (UniqueName: \"kubernetes.io/projected/b157ad12-d8a6-4d56-b797-6515235c7d60-kube-api-access-zbm6t\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921769 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921781 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921797 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b157ad12-d8a6-4d56-b797-6515235c7d60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921810 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b157ad12-d8a6-4d56-b797-6515235c7d60-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921826 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.921841 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d533eb86-adfa-42c8-aabd-6dfd4a6207e9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.935094 4834 scope.go:117] "RemoveContainer" containerID="2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.962701 4834 scope.go:117] "RemoveContainer" containerID="ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" Jan 21 16:05:37 crc kubenswrapper[4834]: E0121 16:05:37.964239 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81\": container with ID starting with ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81 not found: 
ID does not exist" containerID="ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.964271 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81"} err="failed to get container status \"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81\": rpc error: code = NotFound desc = could not find container \"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81\": container with ID starting with ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81 not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.964292 4834 scope.go:117] "RemoveContainer" containerID="2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" Jan 21 16:05:37 crc kubenswrapper[4834]: E0121 16:05:37.966343 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d\": container with ID starting with 2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d not found: ID does not exist" containerID="2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.966364 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d"} err="failed to get container status \"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d\": rpc error: code = NotFound desc = could not find container \"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d\": container with ID starting with 2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.966379 4834 scope.go:117] "RemoveContainer" containerID="ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.966576 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81"} err="failed to get container status \"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81\": rpc error: code = NotFound desc = could not find container \"ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81\": container with ID starting with ba252356f95c88d76b3d2e7f2d7ed68484292bc52fa672faf65ddf17f3d64f81 not found: ID does not exist" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.966592 4834 scope.go:117] "RemoveContainer" containerID="2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d" Jan 21 16:05:37 crc kubenswrapper[4834]: I0121 16:05:37.966794 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d"} err="failed to get container status \"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d\": rpc error: code = NotFound desc = could not find container \"2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d\": container with ID starting with 2f0656ab8c0311d5a0f3f92cfdf0cba5af01247b52d1965df08f3a1050323d6d not found: ID does not exist" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.156581 4834 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.221830 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.229384 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle\") pod \"5b1de62e-e18b-4d18-a6d8-152df459d792\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.229453 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99j4g\" (UniqueName: \"kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g\") pod \"5b1de62e-e18b-4d18-a6d8-152df459d792\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.229500 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data\") pod \"5b1de62e-e18b-4d18-a6d8-152df459d792\" (UID: \"5b1de62e-e18b-4d18-a6d8-152df459d792\") " Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.240182 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g" (OuterVolumeSpecName: "kube-api-access-99j4g") pod "5b1de62e-e18b-4d18-a6d8-152df459d792" (UID: "5b1de62e-e18b-4d18-a6d8-152df459d792"). InnerVolumeSpecName "kube-api-access-99j4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.248954 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.262702 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data" (OuterVolumeSpecName: "config-data") pod "5b1de62e-e18b-4d18-a6d8-152df459d792" (UID: "5b1de62e-e18b-4d18-a6d8-152df459d792"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.271246 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.277295 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.291595 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295123 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-metadata" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295156 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-metadata" Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295182 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" containerName="nova-manage" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295190 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" containerName="nova-manage" Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295206 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1de62e-e18b-4d18-a6d8-152df459d792" containerName="nova-scheduler-scheduler" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295212 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1de62e-e18b-4d18-a6d8-152df459d792" containerName="nova-scheduler-scheduler" Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295223 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-log" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295229 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-log" Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295241 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-api" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295247 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-api" Jan 21 16:05:38 crc kubenswrapper[4834]: E0121 16:05:38.295277 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-log" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295286 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-log" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295501 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-log" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295515 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1de62e-e18b-4d18-a6d8-152df459d792" containerName="nova-scheduler-scheduler" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295526 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" containerName="nova-manage" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 
16:05:38.295542 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-metadata" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295552 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" containerName="nova-metadata-log" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.295574 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" containerName="nova-api-api" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.296957 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.300852 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.311281 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b1de62e-e18b-4d18-a6d8-152df459d792" (UID: "5b1de62e-e18b-4d18-a6d8-152df459d792"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.324082 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.335532 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbf26\" (UniqueName: \"kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.335721 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.335795 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.335904 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.336074 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.336916 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99j4g\" (UniqueName: \"kubernetes.io/projected/5b1de62e-e18b-4d18-a6d8-152df459d792-kube-api-access-99j4g\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:38 crc 
kubenswrapper[4834]: I0121 16:05:38.336945 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1de62e-e18b-4d18-a6d8-152df459d792-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.351522 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b157ad12-d8a6-4d56-b797-6515235c7d60" path="/var/lib/kubelet/pods/b157ad12-d8a6-4d56-b797-6515235c7d60/volumes" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.352708 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d533eb86-adfa-42c8-aabd-6dfd4a6207e9" path="/var/lib/kubelet/pods/d533eb86-adfa-42c8-aabd-6dfd4a6207e9/volumes" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.376364 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.378941 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.383510 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.396516 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444565 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbf26\" (UniqueName: \"kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444705 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qz4\" (UniqueName: \"kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444748 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444778 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444799 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.444989 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data\") pod \"nova-metadata-0\" (UID: 
\"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.445200 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.445548 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.446327 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.452952 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.453059 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.494430 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbf26\" (UniqueName: \"kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26\") pod \"nova-api-0\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.553953 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.554092 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qz4\" (UniqueName: \"kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.554160 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.554220 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " 
pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.554563 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.558555 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.559214 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.578244 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qz4\" (UniqueName: \"kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4\") pod \"nova-metadata-0\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.666329 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.711856 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.883016 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.883074 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5b1de62e-e18b-4d18-a6d8-152df459d792","Type":"ContainerDied","Data":"8f6ecabe7a7d3a7795ba2e4f0a9c6f3898250b4f79e87291e2f92a93745d641e"} Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.883507 4834 scope.go:117] "RemoveContainer" containerID="e4e0935605c55f9849173406b2e7383147c83882f2958a08728275c73910710c" Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.937088 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:38 crc kubenswrapper[4834]: I0121 16:05:38.959725 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.004009 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.007122 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.010974 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.025971 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.079438 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfhfp\" (UniqueName: \"kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.079500 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.079599 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.124627 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.139497 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.140861 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.181401 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.181561 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfhfp\" (UniqueName: \"kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.181605 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.190480 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 
16:05:39.191759 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.199394 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfhfp\" (UniqueName: \"kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp\") pod \"nova-scheduler-0\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.264000 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: W0121 16:05:39.264721 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43b5139a_6bc0_4cab_8c7b_efd0a7ca90b3.slice/crio-1864e048dffa6f960ed73b382c72788c87ab64c8cb57b6d06d1afbf1ab0c7aa7 WatchSource:0}: Error finding container 1864e048dffa6f960ed73b382c72788c87ab64c8cb57b6d06d1afbf1ab0c7aa7: Status 404 returned error can't find the container with id 1864e048dffa6f960ed73b382c72788c87ab64c8cb57b6d06d1afbf1ab0c7aa7 Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.340100 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.401358 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.493296 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.493855 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-796897d589-vfltd" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="dnsmasq-dns" containerID="cri-o://2590c4c18430891c6f3b8350e3447ac0066d30e7855cb0feb033b6433e6d2dfa" gracePeriod=10 Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.916000 4834 generic.go:334] "Generic (PLEG): container finished" podID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerID="2590c4c18430891c6f3b8350e3447ac0066d30e7855cb0feb033b6433e6d2dfa" exitCode=0 Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.916303 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796897d589-vfltd" event={"ID":"733dfef0-c07f-4030-858b-c5a1813ccaaf","Type":"ContainerDied","Data":"2590c4c18430891c6f3b8350e3447ac0066d30e7855cb0feb033b6433e6d2dfa"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.919563 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerStarted","Data":"6284a1681312ee0e5594d56b34125f194b7dff42fa4e6be9fe36685eed8ccbc7"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.919591 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerStarted","Data":"76241acef38e812ba068e2bdee8602c81d62a921c50d57e251336e42945889c1"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.919606 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerStarted","Data":"95d2a3483275b073a10f260d852fba9c095649e97d97c82119f35dd1eeb39ac0"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.920842 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.926401 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerStarted","Data":"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.926441 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerStarted","Data":"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc"} Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.926453 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerStarted","Data":"1864e048dffa6f960ed73b382c72788c87ab64c8cb57b6d06d1afbf1ab0c7aa7"} Jan 21 16:05:39 crc kubenswrapper[4834]: W0121 16:05:39.929757 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00bae74c_5262_4d4c_b7e1_08ff7233812a.slice/crio-f3e04c576ea2c5dffdb69022db0f448032a6a8d45f51409c6b275c83dcea9362 WatchSource:0}: Error finding container f3e04c576ea2c5dffdb69022db0f448032a6a8d45f51409c6b275c83dcea9362: Status 404 returned error can't find the container with id f3e04c576ea2c5dffdb69022db0f448032a6a8d45f51409c6b275c83dcea9362 Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.944607 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:05:39 crc kubenswrapper[4834]: I0121 16:05:39.969962 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9699086829999999 podStartE2EDuration="1.969908683s" podCreationTimestamp="2026-01-21 16:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:39.961643215 +0000 UTC m=+5685.935992270" watchObservedRunningTime="2026-01-21 16:05:39.969908683 +0000 UTC m=+5685.944257728" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.044743 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.044683916 podStartE2EDuration="2.044683916s" podCreationTimestamp="2026-01-21 16:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:40.036052727 +0000 UTC m=+5686.010401762" watchObservedRunningTime="2026-01-21 16:05:40.044683916 +0000 UTC m=+5686.019032961" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.152125 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.211570 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb\") pod \"733dfef0-c07f-4030-858b-c5a1813ccaaf\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.211620 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpjfq\" (UniqueName: \"kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq\") pod \"733dfef0-c07f-4030-858b-c5a1813ccaaf\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.211744 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config\") pod \"733dfef0-c07f-4030-858b-c5a1813ccaaf\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.211788 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc\") pod \"733dfef0-c07f-4030-858b-c5a1813ccaaf\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.211840 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb\") pod \"733dfef0-c07f-4030-858b-c5a1813ccaaf\" (UID: \"733dfef0-c07f-4030-858b-c5a1813ccaaf\") " Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.230491 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq" (OuterVolumeSpecName: "kube-api-access-rpjfq") pod "733dfef0-c07f-4030-858b-c5a1813ccaaf" (UID: "733dfef0-c07f-4030-858b-c5a1813ccaaf"). InnerVolumeSpecName "kube-api-access-rpjfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.280043 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "733dfef0-c07f-4030-858b-c5a1813ccaaf" (UID: "733dfef0-c07f-4030-858b-c5a1813ccaaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.289821 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "733dfef0-c07f-4030-858b-c5a1813ccaaf" (UID: "733dfef0-c07f-4030-858b-c5a1813ccaaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.306364 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config" (OuterVolumeSpecName: "config") pod "733dfef0-c07f-4030-858b-c5a1813ccaaf" (UID: "733dfef0-c07f-4030-858b-c5a1813ccaaf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.309585 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "733dfef0-c07f-4030-858b-c5a1813ccaaf" (UID: "733dfef0-c07f-4030-858b-c5a1813ccaaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.318043 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.318080 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.318094 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.318107 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/733dfef0-c07f-4030-858b-c5a1813ccaaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.318117 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpjfq\" (UniqueName: \"kubernetes.io/projected/733dfef0-c07f-4030-858b-c5a1813ccaaf-kube-api-access-rpjfq\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.362535 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1de62e-e18b-4d18-a6d8-152df459d792" path="/var/lib/kubelet/pods/5b1de62e-e18b-4d18-a6d8-152df459d792/volumes" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.935138 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00bae74c-5262-4d4c-b7e1-08ff7233812a","Type":"ContainerStarted","Data":"d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952"} Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.935553 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00bae74c-5262-4d4c-b7e1-08ff7233812a","Type":"ContainerStarted","Data":"f3e04c576ea2c5dffdb69022db0f448032a6a8d45f51409c6b275c83dcea9362"} Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.938107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796897d589-vfltd" event={"ID":"733dfef0-c07f-4030-858b-c5a1813ccaaf","Type":"ContainerDied","Data":"b0ea5a60fe3ed49f59c8763c43d00527516a2a620abe96c58ed70e601de61e83"} Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.938188 4834 scope.go:117] "RemoveContainer" containerID="2590c4c18430891c6f3b8350e3447ac0066d30e7855cb0feb033b6433e6d2dfa" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.938418 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796897d589-vfltd" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.967651 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.967627138 podStartE2EDuration="2.967627138s" podCreationTimestamp="2026-01-21 16:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:40.964507011 +0000 UTC m=+5686.938856066" watchObservedRunningTime="2026-01-21 16:05:40.967627138 +0000 UTC m=+5686.941976183" Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.990350 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:05:40 crc kubenswrapper[4834]: I0121 16:05:40.996800 4834 scope.go:117] "RemoveContainer" containerID="fbff18b0cbd53887636292fa316134997bf1e9fe0d54cb2c80c0b75d9f0da8ea" Jan 21 16:05:41 crc kubenswrapper[4834]: I0121 16:05:41.005556 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-796897d589-vfltd"] Jan 21 16:05:41 crc kubenswrapper[4834]: I0121 16:05:41.242393 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.030336 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4899h"] Jan 21 16:05:42 crc kubenswrapper[4834]: E0121 16:05:42.030865 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="init" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.030888 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="init" Jan 21 16:05:42 crc kubenswrapper[4834]: E0121 16:05:42.030916 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="dnsmasq-dns" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.030941 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="dnsmasq-dns" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.031180 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" containerName="dnsmasq-dns" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.032541 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.035585 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.044870 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4899h"] Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.048043 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.162282 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.162347 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.162505 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.162622 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fl4\" (UniqueName: \"kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.265047 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.265111 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.265181 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.265221 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4fl4\" (UniqueName: 
\"kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.272079 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.272564 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.283447 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.285370 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4fl4\" (UniqueName: \"kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4\") pod \"nova-cell1-cell-mapping-4899h\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.338766 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="733dfef0-c07f-4030-858b-c5a1813ccaaf" path="/var/lib/kubelet/pods/733dfef0-c07f-4030-858b-c5a1813ccaaf/volumes" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.357203 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.815571 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4899h"] Jan 21 16:05:42 crc kubenswrapper[4834]: I0121 16:05:42.965872 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4899h" event={"ID":"d51eb0eb-3a6b-4022-918d-ace6fad9a93e","Type":"ContainerStarted","Data":"cc535827734bb846b22ff0e22a14036ad5b6d2a1a8cd550bd15f34988cee1b1a"} Jan 21 16:05:43 crc kubenswrapper[4834]: I0121 16:05:43.712340 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:05:43 crc kubenswrapper[4834]: I0121 16:05:43.712649 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:05:43 crc kubenswrapper[4834]: I0121 16:05:43.976461 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4899h" event={"ID":"d51eb0eb-3a6b-4022-918d-ace6fad9a93e","Type":"ContainerStarted","Data":"53b1f07ea3d4b85a75fadca3f9dd1aba6a96dabbc168062a8e560f8e418b852a"} Jan 21 16:05:44 crc kubenswrapper[4834]: I0121 16:05:44.006738 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4899h" podStartSLOduration=3.006694456 podStartE2EDuration="3.006694456s" podCreationTimestamp="2026-01-21 16:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:43.999362557 +0000 UTC m=+5689.973711612" watchObservedRunningTime="2026-01-21 16:05:44.006694456 +0000 UTC m=+5689.981043501" Jan 21 16:05:44 crc kubenswrapper[4834]: I0121 16:05:44.341110 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:05:47 crc kubenswrapper[4834]: I0121 16:05:47.114099 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:05:47 crc kubenswrapper[4834]: I0121 16:05:47.114475 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:05:47 crc kubenswrapper[4834]: I0121 16:05:47.114518 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:05:47 crc kubenswrapper[4834]: I0121 16:05:47.115181 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:05:47 crc kubenswrapper[4834]: I0121 16:05:47.115235 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" gracePeriod=600 Jan 21 16:05:47 crc kubenswrapper[4834]: E0121 16:05:47.257684 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.013459 4834 generic.go:334] "Generic (PLEG): container finished" podID="d51eb0eb-3a6b-4022-918d-ace6fad9a93e" containerID="53b1f07ea3d4b85a75fadca3f9dd1aba6a96dabbc168062a8e560f8e418b852a" exitCode=0 Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.013542 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4899h" event={"ID":"d51eb0eb-3a6b-4022-918d-ace6fad9a93e","Type":"ContainerDied","Data":"53b1f07ea3d4b85a75fadca3f9dd1aba6a96dabbc168062a8e560f8e418b852a"} Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.018092 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" exitCode=0 Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.018143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"} Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.018208 4834 scope.go:117] "RemoveContainer" containerID="d3bc39a16cddf51fe4af8641c77c90c2682ec58f862bb19d7774d650115f85e6" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.019077 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:05:48 crc kubenswrapper[4834]: E0121 16:05:48.019470 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.667540 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.671303 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.713327 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:05:48 crc kubenswrapper[4834]: I0121 16:05:48.713383 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.340884 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:05:49 
crc kubenswrapper[4834]: I0121 16:05:49.372225 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.449379 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.517058 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts\") pod \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.517231 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data\") pod \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.517300 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4fl4\" (UniqueName: \"kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4\") pod \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.517512 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle\") pod \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\" (UID: \"d51eb0eb-3a6b-4022-918d-ace6fad9a93e\") " Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.525111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts" (OuterVolumeSpecName: "scripts") pod "d51eb0eb-3a6b-4022-918d-ace6fad9a93e" (UID: "d51eb0eb-3a6b-4022-918d-ace6fad9a93e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.539338 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4" (OuterVolumeSpecName: "kube-api-access-s4fl4") pod "d51eb0eb-3a6b-4022-918d-ace6fad9a93e" (UID: "d51eb0eb-3a6b-4022-918d-ace6fad9a93e"). InnerVolumeSpecName "kube-api-access-s4fl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.550825 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d51eb0eb-3a6b-4022-918d-ace6fad9a93e" (UID: "d51eb0eb-3a6b-4022-918d-ace6fad9a93e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.551733 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data" (OuterVolumeSpecName: "config-data") pod "d51eb0eb-3a6b-4022-918d-ace6fad9a93e" (UID: "d51eb0eb-3a6b-4022-918d-ace6fad9a93e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.621273 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.621908 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.621984 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.622048 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4fl4\" (UniqueName: \"kubernetes.io/projected/d51eb0eb-3a6b-4022-918d-ace6fad9a93e-kube-api-access-s4fl4\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.749203 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.749199 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.64:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.832115 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:05:49 crc kubenswrapper[4834]: I0121 16:05:49.832171 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.65:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.045104 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4899h" event={"ID":"d51eb0eb-3a6b-4022-918d-ace6fad9a93e","Type":"ContainerDied","Data":"cc535827734bb846b22ff0e22a14036ad5b6d2a1a8cd550bd15f34988cee1b1a"} Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.045181 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc535827734bb846b22ff0e22a14036ad5b6d2a1a8cd550bd15f34988cee1b1a" Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.045625 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4899h" Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.095027 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.221788 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.222364 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-log" containerID="cri-o://76241acef38e812ba068e2bdee8602c81d62a921c50d57e251336e42945889c1" gracePeriod=30 Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.222819 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-api" containerID="cri-o://6284a1681312ee0e5594d56b34125f194b7dff42fa4e6be9fe36685eed8ccbc7" gracePeriod=30 Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.256022 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.256302 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-log" containerID="cri-o://535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc" gracePeriod=30 Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.256519 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-metadata" containerID="cri-o://4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054" gracePeriod=30 Jan 21 16:05:50 crc kubenswrapper[4834]: I0121 16:05:50.786383 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:51 crc kubenswrapper[4834]: I0121 16:05:51.054583 4834 generic.go:334] "Generic (PLEG): container finished" podID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerID="76241acef38e812ba068e2bdee8602c81d62a921c50d57e251336e42945889c1" exitCode=143 Jan 21 16:05:51 crc kubenswrapper[4834]: I0121 16:05:51.054658 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerDied","Data":"76241acef38e812ba068e2bdee8602c81d62a921c50d57e251336e42945889c1"} Jan 21 16:05:51 crc kubenswrapper[4834]: I0121 16:05:51.057724 4834 generic.go:334] "Generic (PLEG): container finished" podID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerID="535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc" exitCode=143 Jan 21 16:05:51 crc kubenswrapper[4834]: I0121 16:05:51.058312 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerDied","Data":"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc"} Jan 21 16:05:52 crc kubenswrapper[4834]: I0121 16:05:52.066897 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerName="nova-scheduler-scheduler" containerID="cri-o://d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" gracePeriod=30 
Jan 21 16:05:54 crc kubenswrapper[4834]: E0121 16:05:54.342836 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:05:54 crc kubenswrapper[4834]: E0121 16:05:54.344463 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:05:54 crc kubenswrapper[4834]: E0121 16:05:54.345841 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:05:54 crc kubenswrapper[4834]: E0121 16:05:54.345986 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerName="nova-scheduler-scheduler" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.024618 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.095802 4834 generic.go:334] "Generic (PLEG): container finished" podID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerID="4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054" exitCode=0 Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.095864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerDied","Data":"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054"} Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.095897 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3","Type":"ContainerDied","Data":"1864e048dffa6f960ed73b382c72788c87ab64c8cb57b6d06d1afbf1ab0c7aa7"} Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.095919 4834 scope.go:117] "RemoveContainer" containerID="4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.096185 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.123799 4834 scope.go:117] "RemoveContainer" containerID="535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.129715 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data\") pod \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.129760 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle\") pod \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.129850 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7qz4\" (UniqueName: \"kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4\") pod \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.129889 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs\") pod \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\" (UID: \"43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3\") " Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.130670 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs" (OuterVolumeSpecName: "logs") pod "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" (UID: "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.135103 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4" (OuterVolumeSpecName: "kube-api-access-b7qz4") pod "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" (UID: "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3"). InnerVolumeSpecName "kube-api-access-b7qz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.147520 4834 scope.go:117] "RemoveContainer" containerID="4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054" Jan 21 16:05:55 crc kubenswrapper[4834]: E0121 16:05:55.147965 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054\": container with ID starting with 4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054 not found: ID does not exist" containerID="4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.148013 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054"} err="failed to get container status \"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054\": rpc error: code = NotFound desc = could not find container \"4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054\": container with ID starting with 4e953d2a963b812ffcb3670cb1c947659f79033018f7172b439a1f5e1ca99054 not found: ID does not exist" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.148037 4834 scope.go:117] "RemoveContainer" containerID="535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc" Jan 21 16:05:55 crc kubenswrapper[4834]: E0121 16:05:55.148299 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc\": container with ID starting with 535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc not found: ID does not exist" containerID="535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.148348 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc"} err="failed to get container status \"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc\": rpc error: code = NotFound desc = could not find container \"535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc\": container with ID starting with 535938639bd059bc74da6bd236a4e1e3def7f774051fb5f4e702cb8d419744fc not found: ID does not exist" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.161715 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data" (OuterVolumeSpecName: "config-data") pod "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" (UID: "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.167102 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" (UID: "43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.231808 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.231858 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.231871 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7qz4\" (UniqueName: \"kubernetes.io/projected/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-kube-api-access-b7qz4\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.231882 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.436158 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.454113 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.473108 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:55 crc kubenswrapper[4834]: E0121 16:05:55.473735 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-log" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.473763 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-log" Jan 21 16:05:55 crc kubenswrapper[4834]: E0121 16:05:55.473800 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51eb0eb-3a6b-4022-918d-ace6fad9a93e" containerName="nova-manage" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.473810 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51eb0eb-3a6b-4022-918d-ace6fad9a93e" containerName="nova-manage" Jan 21 16:05:55 crc kubenswrapper[4834]: E0121 16:05:55.473819 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-metadata" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.473829 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-metadata" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.474064 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51eb0eb-3a6b-4022-918d-ace6fad9a93e" containerName="nova-manage" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.474097 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-metadata" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.474113 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" containerName="nova-metadata-log" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.475561 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.477999 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.492983 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.536525 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.536610 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.536868 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.536975 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxrj\" (UniqueName: \"kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.638951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.639094 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.639133 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvxrj\" (UniqueName: \"kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.639178 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.639766 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.643587 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.644272 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.655270 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxrj\" (UniqueName: \"kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj\") pod \"nova-metadata-0\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " pod="openstack/nova-metadata-0" Jan 21 16:05:55 crc kubenswrapper[4834]: I0121 16:05:55.802420 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.110350 4834 generic.go:334] "Generic (PLEG): container finished" podID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerID="6284a1681312ee0e5594d56b34125f194b7dff42fa4e6be9fe36685eed8ccbc7" exitCode=0 Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.110481 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerDied","Data":"6284a1681312ee0e5594d56b34125f194b7dff42fa4e6be9fe36685eed8ccbc7"} Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.253473 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.340184 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3" path="/var/lib/kubelet/pods/43b5139a-6bc0-4cab-8c7b-efd0a7ca90b3/volumes" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.348105 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.353798 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data\") pod \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.354045 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle\") pod \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.354200 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs\") pod \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.354311 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbf26\" (UniqueName: \"kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26\") pod \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\" (UID: \"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6\") " Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.354864 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs" (OuterVolumeSpecName: "logs") pod "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" (UID: "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.359354 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26" (OuterVolumeSpecName: "kube-api-access-mbf26") pod "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" (UID: "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6"). InnerVolumeSpecName "kube-api-access-mbf26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.376825 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data" (OuterVolumeSpecName: "config-data") pod "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" (UID: "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.384212 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" (UID: "0f7487a7-e1ae-48d4-96db-0d21d3b66ad6"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.456421 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.456459 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.456472 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:56 crc kubenswrapper[4834]: I0121 16:05:56.456485 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbf26\" (UniqueName: \"kubernetes.io/projected/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6-kube-api-access-mbf26\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.123803 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerStarted","Data":"a8dae434e29611ebb5a179b8059457be0063e1e5074803392d962ace277d0b2c"} Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.124201 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerStarted","Data":"ef4fe6c7727b30f3be31cb7eff75f90a294ec8656a5a885e23cd7ac8569894f5"} Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.124218 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerStarted","Data":"26cf783b1d35e450cb3438146a22598ae1c984df620d59b3e6e05d409cbbde50"} Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.125832 4834 generic.go:334] "Generic (PLEG): container finished" podID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerID="d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" exitCode=0 Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.125877 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00bae74c-5262-4d4c-b7e1-08ff7233812a","Type":"ContainerDied","Data":"d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952"} Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.129241 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f7487a7-e1ae-48d4-96db-0d21d3b66ad6","Type":"ContainerDied","Data":"95d2a3483275b073a10f260d852fba9c095649e97d97c82119f35dd1eeb39ac0"} Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.129295 4834 scope.go:117] "RemoveContainer" containerID="6284a1681312ee0e5594d56b34125f194b7dff42fa4e6be9fe36685eed8ccbc7" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.129456 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.147041 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.147016556 podStartE2EDuration="2.147016556s" podCreationTimestamp="2026-01-21 16:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:57.140590767 +0000 UTC m=+5703.114939822" watchObservedRunningTime="2026-01-21 16:05:57.147016556 +0000 UTC m=+5703.121365601" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.168007 4834 scope.go:117] "RemoveContainer" containerID="76241acef38e812ba068e2bdee8602c81d62a921c50d57e251336e42945889c1" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.169920 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.217535 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.233370 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:57 crc kubenswrapper[4834]: E0121 16:05:57.233825 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-log" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.233845 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-log" Jan 21 16:05:57 crc kubenswrapper[4834]: E0121 16:05:57.233860 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-api" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.233868 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-api" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.234070 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-api" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.234082 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" containerName="nova-api-log" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.235035 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.237781 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.250578 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.363441 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.383443 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.383509 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.383551 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf5bw\" (UniqueName: \"kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.383637 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.484775 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfhfp\" (UniqueName: \"kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp\") pod \"00bae74c-5262-4d4c-b7e1-08ff7233812a\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.486175 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data\") pod \"00bae74c-5262-4d4c-b7e1-08ff7233812a\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.486529 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle\") pod \"00bae74c-5262-4d4c-b7e1-08ff7233812a\" (UID: \"00bae74c-5262-4d4c-b7e1-08ff7233812a\") " Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.486839 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf5bw\" (UniqueName: \"kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.487024 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.487252 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.487398 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.488168 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.493064 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.493164 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.493767 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp" (OuterVolumeSpecName: "kube-api-access-kfhfp") pod "00bae74c-5262-4d4c-b7e1-08ff7233812a" (UID: "00bae74c-5262-4d4c-b7e1-08ff7233812a"). InnerVolumeSpecName "kube-api-access-kfhfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.509249 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf5bw\" (UniqueName: \"kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw\") pod \"nova-api-0\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.518087 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data" (OuterVolumeSpecName: "config-data") pod "00bae74c-5262-4d4c-b7e1-08ff7233812a" (UID: "00bae74c-5262-4d4c-b7e1-08ff7233812a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.521225 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00bae74c-5262-4d4c-b7e1-08ff7233812a" (UID: "00bae74c-5262-4d4c-b7e1-08ff7233812a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.558342 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.589265 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfhfp\" (UniqueName: \"kubernetes.io/projected/00bae74c-5262-4d4c-b7e1-08ff7233812a-kube-api-access-kfhfp\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.589305 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:57 crc kubenswrapper[4834]: I0121 16:05:57.589315 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bae74c-5262-4d4c-b7e1-08ff7233812a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.030352 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.136939 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerStarted","Data":"e2c4ac198fb9595cb69fd67b8a389a79623fcff486b08625a62177a7ccdafcaf"} Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.138570 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00bae74c-5262-4d4c-b7e1-08ff7233812a","Type":"ContainerDied","Data":"f3e04c576ea2c5dffdb69022db0f448032a6a8d45f51409c6b275c83dcea9362"} Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.138587 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.138629 4834 scope.go:117] "RemoveContainer" containerID="d19d142a5007d0ff6ef54febc7c9d1bb1c3cee2e42564b087cea76a1f767a952" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.176057 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.194433 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.204213 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:58 crc kubenswrapper[4834]: E0121 16:05:58.204779 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerName="nova-scheduler-scheduler" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.204799 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerName="nova-scheduler-scheduler" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.205038 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" containerName="nova-scheduler-scheduler" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.205850 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.207550 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.213587 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.335314 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bae74c-5262-4d4c-b7e1-08ff7233812a" path="/var/lib/kubelet/pods/00bae74c-5262-4d4c-b7e1-08ff7233812a/volumes" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.336074 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7487a7-e1ae-48d4-96db-0d21d3b66ad6" path="/var/lib/kubelet/pods/0f7487a7-e1ae-48d4-96db-0d21d3b66ad6/volumes" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.405283 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.405576 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4qmq\" (UniqueName: \"kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.405818 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.507510 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.507641 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4qmq\" (UniqueName: \"kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.507710 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.511831 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: 
I0121 16:05:58.512425 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.524321 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4qmq\" (UniqueName: \"kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq\") pod \"nova-scheduler-0\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " pod="openstack/nova-scheduler-0" Jan 21 16:05:58 crc kubenswrapper[4834]: I0121 16:05:58.535806 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.003787 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.153861 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerStarted","Data":"f0728f9ad2148e12e5d07e7199bd24bdd2a095edc118e405966c253e6cdbcc6a"} Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.153917 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerStarted","Data":"0d8171c55ff9fac2457f149ddb426c7b4ad52b1776a81221c48e8e27702ff13d"} Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.157391 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9fca2aa-81cc-4784-8104-fe7e118c3c17","Type":"ContainerStarted","Data":"4d42505f1d23c9fa75559953652c0a98787d15caf60028624115711097e90ea1"} Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.181526 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.181501796 podStartE2EDuration="2.181501796s" podCreationTimestamp="2026-01-21 16:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:59.173816335 +0000 UTC m=+5705.148165390" watchObservedRunningTime="2026-01-21 16:05:59.181501796 +0000 UTC m=+5705.155850841" Jan 21 16:05:59 crc kubenswrapper[4834]: I0121 16:05:59.324626 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:05:59 crc kubenswrapper[4834]: E0121 16:05:59.325207 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:06:00 crc kubenswrapper[4834]: I0121 16:06:00.171113 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9fca2aa-81cc-4784-8104-fe7e118c3c17","Type":"ContainerStarted","Data":"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a"} Jan 21 16:06:00 crc kubenswrapper[4834]: I0121 16:06:00.217547 4834 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.217508016 podStartE2EDuration="2.217508016s" podCreationTimestamp="2026-01-21 16:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:00.202146096 +0000 UTC m=+5706.176495141" watchObservedRunningTime="2026-01-21 16:06:00.217508016 +0000 UTC m=+5706.191857111" Jan 21 16:06:00 crc kubenswrapper[4834]: I0121 16:06:00.803663 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:06:00 crc kubenswrapper[4834]: I0121 16:06:00.803765 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:06:03 crc kubenswrapper[4834]: I0121 16:06:03.536490 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:06:05 crc kubenswrapper[4834]: I0121 16:06:05.803684 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:06:05 crc kubenswrapper[4834]: I0121 16:06:05.804142 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:06:06 crc kubenswrapper[4834]: I0121 16:06:06.885159 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:06:06 crc kubenswrapper[4834]: I0121 16:06:06.885159 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:06:07 crc kubenswrapper[4834]: I0121 16:06:07.559432 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:06:07 crc kubenswrapper[4834]: I0121 16:06:07.559516 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:06:08 crc kubenswrapper[4834]: I0121 16:06:08.101607 4834 scope.go:117] "RemoveContainer" containerID="ef5af3d246b352ee98e3abdf35236257301233915ea608d56070fa0ba6d643cc" Jan 21 16:06:08 crc kubenswrapper[4834]: I0121 16:06:08.536261 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:06:08 crc kubenswrapper[4834]: I0121 16:06:08.577389 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:06:08 crc kubenswrapper[4834]: I0121 16:06:08.641297 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:06:08 crc kubenswrapper[4834]: I0121 16:06:08.641845 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 21 16:06:09 crc kubenswrapper[4834]: I0121 16:06:09.284624 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:06:14 crc kubenswrapper[4834]: I0121 16:06:14.330893 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:06:14 crc kubenswrapper[4834]: E0121 16:06:14.332672 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:06:15 crc kubenswrapper[4834]: I0121 16:06:15.806113 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:06:15 crc kubenswrapper[4834]: I0121 16:06:15.806371 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:06:15 crc kubenswrapper[4834]: I0121 16:06:15.808972 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:06:15 crc kubenswrapper[4834]: I0121 16:06:15.809913 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:06:17 crc kubenswrapper[4834]: I0121 16:06:17.562596 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:06:17 crc kubenswrapper[4834]: I0121 16:06:17.563274 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:06:17 crc kubenswrapper[4834]: I0121 16:06:17.564484 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:06:17 crc kubenswrapper[4834]: I0121 16:06:17.566046 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.388854 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.392700 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.594658 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.611273 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.611393 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.682957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxc2b\" (UniqueName: \"kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.683031 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.683065 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.683079 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.683125 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.784831 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.784995 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxc2b\" (UniqueName: \"kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.785057 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.785101 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: 
\"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.785122 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.785941 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.785949 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.786572 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.786704 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.809897 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxc2b\" (UniqueName: \"kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b\") pod \"dnsmasq-dns-86667fc57f-lwd7j\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:18 crc kubenswrapper[4834]: I0121 16:06:18.938078 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:19 crc kubenswrapper[4834]: I0121 16:06:19.429739 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:20 crc kubenswrapper[4834]: I0121 16:06:20.406077 4834 generic.go:334] "Generic (PLEG): container finished" podID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerID="32cae23987f49c8418736bcacd9f057d58490b456247e344b4ac373eb60f7048" exitCode=0 Jan 21 16:06:20 crc kubenswrapper[4834]: I0121 16:06:20.406158 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" event={"ID":"2d868260-9d37-459e-b333-78d9d1f7f1dd","Type":"ContainerDied","Data":"32cae23987f49c8418736bcacd9f057d58490b456247e344b4ac373eb60f7048"} Jan 21 16:06:20 crc kubenswrapper[4834]: I0121 16:06:20.406466 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" event={"ID":"2d868260-9d37-459e-b333-78d9d1f7f1dd","Type":"ContainerStarted","Data":"30a05d7b4c8ebfe09146c987859c2649527e7d7dd27cbd0af83a38e418ad41dd"} Jan 21 16:06:21 crc kubenswrapper[4834]: I0121 16:06:21.417029 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" event={"ID":"2d868260-9d37-459e-b333-78d9d1f7f1dd","Type":"ContainerStarted","Data":"d678245dce90e4be55b7e378d1e432cf606ea3fd6b7a6e04c6064d731d2eaf29"} Jan 21 16:06:21 crc kubenswrapper[4834]: I0121 16:06:21.417485 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:21 crc kubenswrapper[4834]: I0121 16:06:21.440267 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" podStartSLOduration=3.440247337 podStartE2EDuration="3.440247337s" podCreationTimestamp="2026-01-21 16:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:21.435391076 +0000 UTC m=+5727.409740121" watchObservedRunningTime="2026-01-21 16:06:21.440247337 +0000 UTC m=+5727.414596382" Jan 21 16:06:26 crc kubenswrapper[4834]: I0121 16:06:26.325950 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:06:26 crc kubenswrapper[4834]: E0121 16:06:26.327205 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:06:28 crc kubenswrapper[4834]: I0121 16:06:28.941277 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.019971 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.020225 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="dnsmasq-dns" containerID="cri-o://4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2" 
gracePeriod=10 Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.499689 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.504868 4834 generic.go:334] "Generic (PLEG): container finished" podID="189e8ec9-d686-452f-8094-9f8c3433a638" containerID="4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2" exitCode=0 Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.504941 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" event={"ID":"189e8ec9-d686-452f-8094-9f8c3433a638","Type":"ContainerDied","Data":"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2"} Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.504950 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.504977 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" event={"ID":"189e8ec9-d686-452f-8094-9f8c3433a638","Type":"ContainerDied","Data":"54f2d51e50bd5a9245e224d5ee5cd907ab5080721de0b8f59199cd79e5ff56e0"} Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.505001 4834 scope.go:117] "RemoveContainer" containerID="4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.533869 4834 scope.go:117] "RemoveContainer" containerID="e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.570115 4834 scope.go:117] "RemoveContainer" containerID="4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2" Jan 21 16:06:29 crc kubenswrapper[4834]: E0121 16:06:29.570661 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2\": container with ID starting with 4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2 not found: ID does not exist" containerID="4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.570708 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2"} err="failed to get container status \"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2\": rpc error: code = NotFound desc = could not find container \"4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2\": container with ID starting with 4d6c0d3ca70cfc50b62448cbff85b5dad47e658959933922e384cc42b7c0f4c2 not found: ID does not exist" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.570737 4834 scope.go:117] "RemoveContainer" containerID="e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb" Jan 21 16:06:29 crc kubenswrapper[4834]: E0121 16:06:29.571132 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb\": container with ID starting with e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb not found: ID does not exist" containerID="e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb" Jan 21 16:06:29 crc kubenswrapper[4834]: 
I0121 16:06:29.571180 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb"} err="failed to get container status \"e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb\": rpc error: code = NotFound desc = could not find container \"e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb\": container with ID starting with e0c4a63b390aa3c31251c6d472f7ea9cf69de86a1934e224086c66b5c3d5e1eb not found: ID does not exist" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.603560 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc\") pod \"189e8ec9-d686-452f-8094-9f8c3433a638\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.603806 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfqrp\" (UniqueName: \"kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp\") pod \"189e8ec9-d686-452f-8094-9f8c3433a638\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.603856 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb\") pod \"189e8ec9-d686-452f-8094-9f8c3433a638\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.603973 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb\") pod \"189e8ec9-d686-452f-8094-9f8c3433a638\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.604120 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config\") pod \"189e8ec9-d686-452f-8094-9f8c3433a638\" (UID: \"189e8ec9-d686-452f-8094-9f8c3433a638\") " Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.618515 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp" (OuterVolumeSpecName: "kube-api-access-rfqrp") pod "189e8ec9-d686-452f-8094-9f8c3433a638" (UID: "189e8ec9-d686-452f-8094-9f8c3433a638"). InnerVolumeSpecName "kube-api-access-rfqrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.661379 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "189e8ec9-d686-452f-8094-9f8c3433a638" (UID: "189e8ec9-d686-452f-8094-9f8c3433a638"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.663069 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "189e8ec9-d686-452f-8094-9f8c3433a638" (UID: "189e8ec9-d686-452f-8094-9f8c3433a638"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.669739 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "189e8ec9-d686-452f-8094-9f8c3433a638" (UID: "189e8ec9-d686-452f-8094-9f8c3433a638"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.673457 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config" (OuterVolumeSpecName: "config") pod "189e8ec9-d686-452f-8094-9f8c3433a638" (UID: "189e8ec9-d686-452f-8094-9f8c3433a638"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.706720 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfqrp\" (UniqueName: \"kubernetes.io/projected/189e8ec9-d686-452f-8094-9f8c3433a638-kube-api-access-rfqrp\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.706759 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.706768 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.706780 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.706791 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/189e8ec9-d686-452f-8094-9f8c3433a638-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.840823 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:06:29 crc kubenswrapper[4834]: I0121 16:06:29.850400 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d88fc77c-5d4pt"] Jan 21 16:06:30 crc kubenswrapper[4834]: I0121 16:06:30.335407 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" path="/var/lib/kubelet/pods/189e8ec9-d686-452f-8094-9f8c3433a638/volumes" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.488421 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k5ptb"] Jan 21 16:06:31 crc kubenswrapper[4834]: E0121 16:06:31.489370 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="dnsmasq-dns" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.489388 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="dnsmasq-dns" Jan 21 16:06:31 crc kubenswrapper[4834]: E0121 16:06:31.489401 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="init" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.489409 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="init" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.489621 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="dnsmasq-dns" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.490409 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.497854 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5ptb"] Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.570917 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.571355 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6hk\" (UniqueName: \"kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.609092 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3044-account-create-update-kvmlb"] Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.610360 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.613425 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.622197 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3044-account-create-update-kvmlb"] Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.673494 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts\") pod \"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.673569 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.673625 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9w25\" (UniqueName: \"kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25\") pod \"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.673716 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj6hk\" (UniqueName: \"kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.674638 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.690081 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj6hk\" (UniqueName: \"kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk\") pod \"cinder-db-create-k5ptb\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.775954 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts\") pod \"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.776052 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9w25\" (UniqueName: \"kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25\") pod 
\"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.777348 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts\") pod \"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.795447 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9w25\" (UniqueName: \"kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25\") pod \"cinder-3044-account-create-update-kvmlb\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.848868 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:31 crc kubenswrapper[4834]: I0121 16:06:31.932835 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.314711 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5ptb"] Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.448883 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3044-account-create-update-kvmlb"] Jan 21 16:06:32 crc kubenswrapper[4834]: W0121 16:06:32.449555 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda879ebb7_ee92_4e94_bf4d_17c2549b9c60.slice/crio-74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5 WatchSource:0}: Error finding container 74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5: Status 404 returned error can't find the container with id 74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5 Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.546213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3044-account-create-update-kvmlb" event={"ID":"a879ebb7-ee92-4e94-bf4d-17c2549b9c60","Type":"ContainerStarted","Data":"74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5"} Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.548267 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5ptb" event={"ID":"06969108-96f5-4459-807d-a648e2ccb025","Type":"ContainerStarted","Data":"e7d1ef14bd51a88a076f84c81e502a139d6d1591aa60ec72f6c1f2d16bd898a1"} Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.548288 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5ptb" event={"ID":"06969108-96f5-4459-807d-a648e2ccb025","Type":"ContainerStarted","Data":"edd3b4197457c5bb2e62568ca4209e79f3e7ff6132b005c1f876ebbe633d6e17"} Jan 21 16:06:32 crc kubenswrapper[4834]: I0121 16:06:32.571149 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-k5ptb" podStartSLOduration=1.571128071 podStartE2EDuration="1.571128071s" podCreationTimestamp="2026-01-21 16:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-21 16:06:32.564169773 +0000 UTC m=+5738.538518828" watchObservedRunningTime="2026-01-21 16:06:32.571128071 +0000 UTC m=+5738.545477116" Jan 21 16:06:33 crc kubenswrapper[4834]: I0121 16:06:33.566557 4834 generic.go:334] "Generic (PLEG): container finished" podID="06969108-96f5-4459-807d-a648e2ccb025" containerID="e7d1ef14bd51a88a076f84c81e502a139d6d1591aa60ec72f6c1f2d16bd898a1" exitCode=0 Jan 21 16:06:33 crc kubenswrapper[4834]: I0121 16:06:33.566663 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5ptb" event={"ID":"06969108-96f5-4459-807d-a648e2ccb025","Type":"ContainerDied","Data":"e7d1ef14bd51a88a076f84c81e502a139d6d1591aa60ec72f6c1f2d16bd898a1"} Jan 21 16:06:33 crc kubenswrapper[4834]: I0121 16:06:33.571105 4834 generic.go:334] "Generic (PLEG): container finished" podID="a879ebb7-ee92-4e94-bf4d-17c2549b9c60" containerID="9e7f8d0fb02a18b039ccdce3c4e4ff8757d163efacb7a7a3f721330a94c92a4b" exitCode=0 Jan 21 16:06:33 crc kubenswrapper[4834]: I0121 16:06:33.571170 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3044-account-create-update-kvmlb" event={"ID":"a879ebb7-ee92-4e94-bf4d-17c2549b9c60","Type":"ContainerDied","Data":"9e7f8d0fb02a18b039ccdce3c4e4ff8757d163efacb7a7a3f721330a94c92a4b"} Jan 21 16:06:34 crc kubenswrapper[4834]: I0121 16:06:34.399378 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-66d88fc77c-5d4pt" podUID="189e8ec9-d686-452f-8094-9f8c3433a638" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.61:5353: i/o timeout" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.100279 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3044-account-create-update-kvmlb" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.104385 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.114817 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts\") pod \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.115750 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9w25\" (UniqueName: \"kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25\") pod \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\" (UID: \"a879ebb7-ee92-4e94-bf4d-17c2549b9c60\") " Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.115924 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj6hk\" (UniqueName: \"kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk\") pod \"06969108-96f5-4459-807d-a648e2ccb025\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.116154 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts\") pod \"06969108-96f5-4459-807d-a648e2ccb025\" (UID: \"06969108-96f5-4459-807d-a648e2ccb025\") " Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.117706 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a879ebb7-ee92-4e94-bf4d-17c2549b9c60" (UID: "a879ebb7-ee92-4e94-bf4d-17c2549b9c60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.150812 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06969108-96f5-4459-807d-a648e2ccb025" (UID: "06969108-96f5-4459-807d-a648e2ccb025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.152429 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25" (OuterVolumeSpecName: "kube-api-access-m9w25") pod "a879ebb7-ee92-4e94-bf4d-17c2549b9c60" (UID: "a879ebb7-ee92-4e94-bf4d-17c2549b9c60"). InnerVolumeSpecName "kube-api-access-m9w25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.153348 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk" (OuterVolumeSpecName: "kube-api-access-wj6hk") pod "06969108-96f5-4459-807d-a648e2ccb025" (UID: "06969108-96f5-4459-807d-a648e2ccb025"). InnerVolumeSpecName "kube-api-access-wj6hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.217984 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.218016 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9w25\" (UniqueName: \"kubernetes.io/projected/a879ebb7-ee92-4e94-bf4d-17c2549b9c60-kube-api-access-m9w25\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.218028 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj6hk\" (UniqueName: \"kubernetes.io/projected/06969108-96f5-4459-807d-a648e2ccb025-kube-api-access-wj6hk\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.218037 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06969108-96f5-4459-807d-a648e2ccb025-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.590089 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5ptb" event={"ID":"06969108-96f5-4459-807d-a648e2ccb025","Type":"ContainerDied","Data":"edd3b4197457c5bb2e62568ca4209e79f3e7ff6132b005c1f876ebbe633d6e17"} Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.590321 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd3b4197457c5bb2e62568ca4209e79f3e7ff6132b005c1f876ebbe633d6e17" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.590611 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5ptb" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.592308 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3044-account-create-update-kvmlb" event={"ID":"a879ebb7-ee92-4e94-bf4d-17c2549b9c60","Type":"ContainerDied","Data":"74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5"} Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.592350 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74fd6533ab9aa8e1b895e17873c6e8803f9f624d769aff5083c7ae0bf2adaab5" Jan 21 16:06:35 crc kubenswrapper[4834]: I0121 16:06:35.592382 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3044-account-create-update-kvmlb"
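
The unmount sequence above ("operationExecutor.UnmountVolume started", "UnmountVolume.TearDown succeeded", "Volume detached") is kubelet's volume manager reconciling the actual state of the world against the desired state after the two finished job pods were deleted. A toy Go sketch of that diff follows; the names and structure are illustrative assumptions, not the real reconciler_common.go, and the plugin TearDown is elided to a comment.

package main

import "fmt"

func main() {
	// Volumes still mounted for the deleted pod (actual state of world).
	actual := []string{"operator-scripts", "kube-api-access-m9w25"}
	// The pod is gone, so the desired state of world holds nothing for it.
	desired := map[string]bool{}

	for _, vol := range actual {
		if !desired[vol] {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			// ...the volume plugin's TearDown would run here...
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}
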
Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.855633 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dds77"] Jan 21 16:06:36 crc kubenswrapper[4834]: E0121 16:06:36.856209 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06969108-96f5-4459-807d-a648e2ccb025" containerName="mariadb-database-create" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.856226 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="06969108-96f5-4459-807d-a648e2ccb025" containerName="mariadb-database-create" Jan 21 16:06:36 crc kubenswrapper[4834]: E0121 16:06:36.856261 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a879ebb7-ee92-4e94-bf4d-17c2549b9c60" containerName="mariadb-account-create-update" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.856268 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a879ebb7-ee92-4e94-bf4d-17c2549b9c60" containerName="mariadb-account-create-update" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.856442 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="06969108-96f5-4459-807d-a648e2ccb025" containerName="mariadb-database-create" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.856466 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a879ebb7-ee92-4e94-bf4d-17c2549b9c60" containerName="mariadb-account-create-update" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.857266 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.862633 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.862778 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-62nq6" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.862845 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.887039 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dds77"] Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.908816 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.908918 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.908979 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdz47\" (UniqueName: \"kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 
16:06:36.909044 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.909145 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:36 crc kubenswrapper[4834]: I0121 16:06:36.909170 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.010468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.010529 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.010582 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdz47\" (UniqueName: \"kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.011408 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.011471 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.011487 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.011595 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.016147 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.016782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.017329 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.024780 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.027226 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdz47\" (UniqueName: \"kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47\") pod \"cinder-db-sync-dds77\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.182097 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:37 crc kubenswrapper[4834]: I0121 16:06:37.719910 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dds77"] Jan 21 16:06:37 crc kubenswrapper[4834]: W0121 16:06:37.748387 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13e195d7_9f94_44cd_8b1f_74631ce95c58.slice/crio-a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f WatchSource:0}: Error finding container a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f: Status 404 returned error can't find the container with id a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f Jan 21 16:06:38 crc kubenswrapper[4834]: I0121 16:06:38.732580 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds77" event={"ID":"13e195d7-9f94-44cd-8b1f-74631ce95c58","Type":"ContainerStarted","Data":"30ac3c92c3e7d84072de5cf78d8266e904971da111ed6fdd8f0d37efcd052300"} Jan 21 16:06:38 crc kubenswrapper[4834]: I0121 16:06:38.733003 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds77" event={"ID":"13e195d7-9f94-44cd-8b1f-74631ce95c58","Type":"ContainerStarted","Data":"a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f"} Jan 21 16:06:38 crc kubenswrapper[4834]: I0121 16:06:38.757082 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dds77" podStartSLOduration=2.75706534 podStartE2EDuration="2.75706534s" podCreationTimestamp="2026-01-21 16:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:38.755494152 +0000 UTC m=+5744.729843197" watchObservedRunningTime="2026-01-21 16:06:38.75706534 +0000 UTC m=+5744.731414385" Jan 21 16:06:41 crc kubenswrapper[4834]: I0121 16:06:41.324594 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:06:41 crc kubenswrapper[4834]: E0121 16:06:41.325265 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:06:41 crc kubenswrapper[4834]: I0121 16:06:41.874003 4834 generic.go:334] "Generic (PLEG): container finished" podID="13e195d7-9f94-44cd-8b1f-74631ce95c58" containerID="30ac3c92c3e7d84072de5cf78d8266e904971da111ed6fdd8f0d37efcd052300" exitCode=0 Jan 21 16:06:41 crc kubenswrapper[4834]: I0121 16:06:41.874106 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds77" event={"ID":"13e195d7-9f94-44cd-8b1f-74631ce95c58","Type":"ContainerDied","Data":"30ac3c92c3e7d84072de5cf78d8266e904971da111ed6fdd8f0d37efcd052300"} Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.290279 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds77"
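
The pod_startup_latency_tracker entry above is plain timestamp arithmetic: podStartSLOduration is the watch-observed running time minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling values indicate no image pull was recorded (the image was apparently already present). A runnable Go check of the logged 2.75706534s figure; the layout string is an assumption matched to the timestamp format these log lines print.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-21 16:06:36 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-21 16:06:38.75706534 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2.75706534s, matching podStartSLOduration in the log entry.
	fmt.Println(running.Sub(created))
}
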
Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.430901 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdz47\" (UniqueName: \"kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.431228 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.431330 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.431398 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.431452 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.431481 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle\") pod \"13e195d7-9f94-44cd-8b1f-74631ce95c58\" (UID: \"13e195d7-9f94-44cd-8b1f-74631ce95c58\") " Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.436439 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.447888 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.458267 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47" (OuterVolumeSpecName: "kube-api-access-vdz47") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "kube-api-access-vdz47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.465075 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts" (OuterVolumeSpecName: "scripts") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.488149 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.507698 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data" (OuterVolumeSpecName: "config-data") pod "13e195d7-9f94-44cd-8b1f-74631ce95c58" (UID: "13e195d7-9f94-44cd-8b1f-74631ce95c58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540653 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540690 4834 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540704 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540715 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdz47\" (UniqueName: \"kubernetes.io/projected/13e195d7-9f94-44cd-8b1f-74631ce95c58-kube-api-access-vdz47\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540725 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/13e195d7-9f94-44cd-8b1f-74631ce95c58-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.540733 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13e195d7-9f94-44cd-8b1f-74631ce95c58-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.894043 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dds77" event={"ID":"13e195d7-9f94-44cd-8b1f-74631ce95c58","Type":"ContainerDied","Data":"a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f"} Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.894086 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a77eb168dc1938defff502441dc1fd3d95a1fc2ce23c8909870cd28232aef95f" Jan 21 16:06:43 crc kubenswrapper[4834]: I0121 16:06:43.894133 4834 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dds77" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.241357 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:06:44 crc kubenswrapper[4834]: E0121 16:06:44.241865 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e195d7-9f94-44cd-8b1f-74631ce95c58" containerName="cinder-db-sync" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.241883 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e195d7-9f94-44cd-8b1f-74631ce95c58" containerName="cinder-db-sync" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.242162 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e195d7-9f94-44cd-8b1f-74631ce95c58" containerName="cinder-db-sync" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.243252 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.266421 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.356860 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.357273 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.357673 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jz9g\" (UniqueName: \"kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.357887 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.357914 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.447535 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.449168 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
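
The RemoveStaleState / "Deleted CPUSet assignment" pair above fires while the new dnsmasq pod is being admitted: the CPU and memory managers drop per-container resource bookkeeping left behind by pods that have since terminated (here the finished cinder-db-sync container). A small Go sketch of that cleanup; the types and values are illustrative assumptions rather than kubelet's actual state structures.

package main

import "fmt"

type key struct{ podUID, container string }

func main() {
	// CPU-set assignments still recorded in state memory for a finished pod.
	assignments := map[key]string{
		{podUID: "13e195d7-9f94-44cd-8b1f-74631ce95c58", container: "cinder-db-sync"}: "0-3",
	}
	// Pods currently active on the node (the newly added dnsmasq pod).
	active := map[string]bool{"da2ce2de-e312-4435-98e7-bd7e410a2f15": true}

	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k) // the "Deleted CPUSet assignment" step
		}
	}
}
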
Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.454527 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.455234 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.455457 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.455914 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-62nq6" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.460825 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.460906 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.460984 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.461038 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.461246 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jz9g\" (UniqueName: \"kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.462654 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.475626 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.476499 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.477800 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.481134 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.503295 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jz9g\" (UniqueName: \"kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g\") pod \"dnsmasq-dns-7d5c47fd59-dkcb5\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563600 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563650 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563711 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563756 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563775 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563800 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxwt\" (UniqueName: \"kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.563832 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.568422 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665591 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665642 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665681 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxwt\" (UniqueName: \"kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665739 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665806 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.665915 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.667091 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.668364 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc 
kubenswrapper[4834]: I0121 16:06:44.674626 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.677225 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.677876 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.686387 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.690667 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxwt\" (UniqueName: \"kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt\") pod \"cinder-api-0\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " pod="openstack/cinder-api-0" Jan 21 16:06:44 crc kubenswrapper[4834]: I0121 16:06:44.774477 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:06:45 crc kubenswrapper[4834]: I0121 16:06:45.124021 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:06:45 crc kubenswrapper[4834]: I0121 16:06:45.922472 4834 generic.go:334] "Generic (PLEG): container finished" podID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerID="1ae68fa6eab0fbd7c07431c09211da10725bf9b2a41a58b87f7cbd0841048b2c" exitCode=0 Jan 21 16:06:45 crc kubenswrapper[4834]: I0121 16:06:45.924133 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" event={"ID":"da2ce2de-e312-4435-98e7-bd7e410a2f15","Type":"ContainerDied","Data":"1ae68fa6eab0fbd7c07431c09211da10725bf9b2a41a58b87f7cbd0841048b2c"} Jan 21 16:06:45 crc kubenswrapper[4834]: I0121 16:06:45.924285 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" event={"ID":"da2ce2de-e312-4435-98e7-bd7e410a2f15","Type":"ContainerStarted","Data":"53292b75d7d7b8244fe564f178f5f005bc295c9d16f8c62345a4fa4293f1ed01"} Jan 21 16:06:46 crc kubenswrapper[4834]: I0121 16:06:46.153586 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:06:46 crc kubenswrapper[4834]: W0121 16:06:46.166662 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0df6aeac_666e_4332_83f1_98702e74bf83.slice/crio-f18a2384102344622a154ed66757ce1c54b88111de726643d15724596c9e9308 WatchSource:0}: Error finding container f18a2384102344622a154ed66757ce1c54b88111de726643d15724596c9e9308: Status 404 returned error can't find the container with id f18a2384102344622a154ed66757ce1c54b88111de726643d15724596c9e9308 Jan 21 16:06:46 crc kubenswrapper[4834]: I0121 16:06:46.946282 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" event={"ID":"da2ce2de-e312-4435-98e7-bd7e410a2f15","Type":"ContainerStarted","Data":"32a695321f1dd740a129b48beb244bd26e742e68ef38ce8642e3ab77acad2c6f"} Jan 21 16:06:46 crc kubenswrapper[4834]: I0121 16:06:46.948055 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:46 crc kubenswrapper[4834]: I0121 16:06:46.948192 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerStarted","Data":"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6"} Jan 21 16:06:46 crc kubenswrapper[4834]: I0121 16:06:46.948272 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerStarted","Data":"f18a2384102344622a154ed66757ce1c54b88111de726643d15724596c9e9308"} Jan 21 16:06:47 crc kubenswrapper[4834]: I0121 16:06:47.972744 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerStarted","Data":"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301"} Jan 21 16:06:48 crc kubenswrapper[4834]: I0121 16:06:48.005280 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.005250742 podStartE2EDuration="4.005250742s" podCreationTimestamp="2026-01-21 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:47.994322831 +0000 UTC m=+5753.968671876" watchObservedRunningTime="2026-01-21 16:06:48.005250742 +0000 UTC m=+5753.979599807" Jan 21 16:06:48 crc kubenswrapper[4834]: I0121 16:06:48.007590 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" podStartSLOduration=4.007575435 podStartE2EDuration="4.007575435s" podCreationTimestamp="2026-01-21 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:46.974436074 +0000 UTC m=+5752.948785139" watchObservedRunningTime="2026-01-21 16:06:48.007575435 +0000 UTC m=+5753.981924500" Jan 21 16:06:48 crc kubenswrapper[4834]: I0121 16:06:48.981170 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:06:54 crc kubenswrapper[4834]: I0121 16:06:54.570115 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:06:54 crc kubenswrapper[4834]: I0121 16:06:54.650170 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:54 crc kubenswrapper[4834]: I0121 16:06:54.650428 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="dnsmasq-dns" containerID="cri-o://d678245dce90e4be55b7e378d1e432cf606ea3fd6b7a6e04c6064d731d2eaf29" gracePeriod=10 Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.162969 4834 generic.go:334] "Generic (PLEG): container finished" podID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerID="d678245dce90e4be55b7e378d1e432cf606ea3fd6b7a6e04c6064d731d2eaf29" exitCode=0 Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.163254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" event={"ID":"2d868260-9d37-459e-b333-78d9d1f7f1dd","Type":"ContainerDied","Data":"d678245dce90e4be55b7e378d1e432cf606ea3fd6b7a6e04c6064d731d2eaf29"} Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.163396 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" event={"ID":"2d868260-9d37-459e-b333-78d9d1f7f1dd","Type":"ContainerDied","Data":"30a05d7b4c8ebfe09146c987859c2649527e7d7dd27cbd0af83a38e418ad41dd"} Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.163413 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a05d7b4c8ebfe09146c987859c2649527e7d7dd27cbd0af83a38e418ad41dd" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.241586 4834 util.go:48] "No ready sandbox for pod can be found. 
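
The "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: it periodically relists containers from the runtime, diffs each container's state against the previous relist, and turns every change into an event the sync loop consumes. A toy Go sketch of that diff step, with state names chosen for illustration only.

package main

import "fmt"

func main() {
	// Container states at the previous and current PLEG relist.
	previous := map[string]string{"d678245dce90": "running"}
	current := map[string]string{"d678245dce90": "exited"}

	for id, state := range current {
		if previous[id] != state {
			switch state {
			case "running":
				fmt.Println("SyncLoop (PLEG) event: ContainerStarted", id)
			case "exited":
				fmt.Println("SyncLoop (PLEG) event: ContainerDied", id)
			}
		}
	}
}
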
Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.328564 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:06:55 crc kubenswrapper[4834]: E0121 16:06:55.328989 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.388857 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb\") pod \"2d868260-9d37-459e-b333-78d9d1f7f1dd\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.389367 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxc2b\" (UniqueName: \"kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b\") pod \"2d868260-9d37-459e-b333-78d9d1f7f1dd\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.389581 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb\") pod \"2d868260-9d37-459e-b333-78d9d1f7f1dd\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.391183 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config\") pod \"2d868260-9d37-459e-b333-78d9d1f7f1dd\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.391482 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc\") pod \"2d868260-9d37-459e-b333-78d9d1f7f1dd\" (UID: \"2d868260-9d37-459e-b333-78d9d1f7f1dd\") " Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.591288 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b" (OuterVolumeSpecName: "kube-api-access-jxc2b") pod "2d868260-9d37-459e-b333-78d9d1f7f1dd" (UID: "2d868260-9d37-459e-b333-78d9d1f7f1dd"). InnerVolumeSpecName "kube-api-access-jxc2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.610956 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d868260-9d37-459e-b333-78d9d1f7f1dd" (UID: "2d868260-9d37-459e-b333-78d9d1f7f1dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
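
The machine-config-daemon error above shows CrashLoopBackOff at its ceiling: kubelet delays each restart of a crashing container with an exponential back-off, commonly documented as starting around 10s, doubling per crash, and capping at 5m, which is the "back-off 5m0s" in the message. A sketch of that schedule follows; the exact base value is an assumption, only the 5m cap is visible in the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // later restarts stay at the logged "back-off 5m0s"
		}
	}
}
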
Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.629716 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxc2b\" (UniqueName: \"kubernetes.io/projected/2d868260-9d37-459e-b333-78d9d1f7f1dd-kube-api-access-jxc2b\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.629765 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.629681 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d868260-9d37-459e-b333-78d9d1f7f1dd" (UID: "2d868260-9d37-459e-b333-78d9d1f7f1dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.667558 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config" (OuterVolumeSpecName: "config") pod "2d868260-9d37-459e-b333-78d9d1f7f1dd" (UID: "2d868260-9d37-459e-b333-78d9d1f7f1dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.693764 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d868260-9d37-459e-b333-78d9d1f7f1dd" (UID: "2d868260-9d37-459e-b333-78d9d1f7f1dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.732604 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.733050 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:55 crc kubenswrapper[4834]: I0121 16:06:55.733119 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d868260-9d37-459e-b333-78d9d1f7f1dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.171460 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86667fc57f-lwd7j" Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.214375 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.231798 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86667fc57f-lwd7j"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.335845 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" path="/var/lib/kubelet/pods/2d868260-9d37-459e-b333-78d9d1f7f1dd/volumes" Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.640112 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.640351 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerName="nova-cell1-conductor-conductor" containerID="cri-o://1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b" gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.648105 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.648395 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="642eadf4-e07e-4a88-9f26-195713f66f79" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18" gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.659998 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.660287 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" containerName="nova-scheduler-scheduler" containerID="cri-o://4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a" gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.674548 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.674872 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" containerID="cri-o://0d8171c55ff9fac2457f149ddb426c7b4ad52b1776a81221c48e8e27702ff13d" gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.675330 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" containerID="cri-o://f0728f9ad2148e12e5d07e7199bd24bdd2a095edc118e405966c253e6cdbcc6a" gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.683350 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.683586 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" containerID="cri-o://ef4fe6c7727b30f3be31cb7eff75f90a294ec8656a5a885e23cd7ac8569894f5" 
gracePeriod=30 Jan 21 16:06:56 crc kubenswrapper[4834]: I0121 16:06:56.684123 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" containerID="cri-o://a8dae434e29611ebb5a179b8059457be0063e1e5074803392d962ace277d0b2c" gracePeriod=30 Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.187268 4834 generic.go:334] "Generic (PLEG): container finished" podID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerID="ef4fe6c7727b30f3be31cb7eff75f90a294ec8656a5a885e23cd7ac8569894f5" exitCode=143 Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.187388 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerDied","Data":"ef4fe6c7727b30f3be31cb7eff75f90a294ec8656a5a885e23cd7ac8569894f5"} Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.189969 4834 generic.go:334] "Generic (PLEG): container finished" podID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerID="0d8171c55ff9fac2457f149ddb426c7b4ad52b1776a81221c48e8e27702ff13d" exitCode=143 Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.190044 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerDied","Data":"0d8171c55ff9fac2457f149ddb426c7b4ad52b1776a81221c48e8e27702ff13d"} Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.472124 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.780417 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.894276 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle\") pod \"642eadf4-e07e-4a88-9f26-195713f66f79\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.894318 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cgvl\" (UniqueName: \"kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl\") pod \"642eadf4-e07e-4a88-9f26-195713f66f79\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.894401 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data\") pod \"642eadf4-e07e-4a88-9f26-195713f66f79\" (UID: \"642eadf4-e07e-4a88-9f26-195713f66f79\") " Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.927403 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl" (OuterVolumeSpecName: "kube-api-access-7cgvl") pod "642eadf4-e07e-4a88-9f26-195713f66f79" (UID: "642eadf4-e07e-4a88-9f26-195713f66f79"). InnerVolumeSpecName "kube-api-access-7cgvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
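
The exit codes in the ContainerDied entries above follow the usual 128+signal convention: the nova log containers report exitCode=143 (128+15, SIGTERM) because the "Killing container with a grace period" path delivers SIGTERM first, while the earlier job containers exited 0 on their own. A small Go decoder, for illustration only.

package main

import "fmt"

// describe interprets a container exit code using the 128+signal convention.
func describe(code int) string {
	switch {
	case code == 0:
		return "clean exit"
	case code > 128:
		return fmt.Sprintf("terminated by signal %d", code-128)
	default:
		return "application error"
	}
}

func main() {
	fmt.Println(143, "=>", describe(143)) // signal 15, SIGTERM
	fmt.Println(0, "=>", describe(0))     // the completed db-create/db-sync jobs above
}
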
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.933286 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data" (OuterVolumeSpecName: "config-data") pod "642eadf4-e07e-4a88-9f26-195713f66f79" (UID: "642eadf4-e07e-4a88-9f26-195713f66f79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:57 crc kubenswrapper[4834]: I0121 16:06:57.970212 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "642eadf4-e07e-4a88-9f26-195713f66f79" (UID: "642eadf4-e07e-4a88-9f26-195713f66f79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.003460 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.003498 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cgvl\" (UniqueName: \"kubernetes.io/projected/642eadf4-e07e-4a88-9f26-195713f66f79-kube-api-access-7cgvl\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.003508 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642eadf4-e07e-4a88-9f26-195713f66f79-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.140679 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.207378 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle\") pod \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.207473 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4qmq\" (UniqueName: \"kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq\") pod \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.207578 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data\") pod \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\" (UID: \"e9fca2aa-81cc-4784-8104-fe7e118c3c17\") " Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.213025 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq" (OuterVolumeSpecName: "kube-api-access-n4qmq") pod "e9fca2aa-81cc-4784-8104-fe7e118c3c17" (UID: "e9fca2aa-81cc-4784-8104-fe7e118c3c17"). InnerVolumeSpecName "kube-api-access-n4qmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.213329 4834 generic.go:334] "Generic (PLEG): container finished" podID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" containerID="4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a" exitCode=0 Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.213446 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9fca2aa-81cc-4784-8104-fe7e118c3c17","Type":"ContainerDied","Data":"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a"} Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.213490 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9fca2aa-81cc-4784-8104-fe7e118c3c17","Type":"ContainerDied","Data":"4d42505f1d23c9fa75559953652c0a98787d15caf60028624115711097e90ea1"} Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.213526 4834 scope.go:117] "RemoveContainer" containerID="4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.214102 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.218205 4834 generic.go:334] "Generic (PLEG): container finished" podID="642eadf4-e07e-4a88-9f26-195713f66f79" containerID="3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18" exitCode=0 Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.218399 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"642eadf4-e07e-4a88-9f26-195713f66f79","Type":"ContainerDied","Data":"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18"} Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.223283 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"642eadf4-e07e-4a88-9f26-195713f66f79","Type":"ContainerDied","Data":"3980055a1ad2da5e1fabea7d0a1b6bbb0307fbc2f3fdda0861b8a2ec2f7a41db"} Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.219769 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.239855 4834 scope.go:117] "RemoveContainer" containerID="4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a" Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.244917 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a\": container with ID starting with 4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a not found: ID does not exist" containerID="4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.244993 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a"} err="failed to get container status \"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a\": rpc error: code = NotFound desc = could not find container \"4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a\": container with ID starting with 4dae158e380aa022b03004b93003de67cd41aa7986d53492d7cd59f1a0bca04a not found: ID does not exist" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.245042 4834 scope.go:117] "RemoveContainer" containerID="3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.247856 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9fca2aa-81cc-4784-8104-fe7e118c3c17" (UID: "e9fca2aa-81cc-4784-8104-fe7e118c3c17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.262120 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data" (OuterVolumeSpecName: "config-data") pod "e9fca2aa-81cc-4784-8104-fe7e118c3c17" (UID: "e9fca2aa-81cc-4784-8104-fe7e118c3c17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.286479 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.291538 4834 scope.go:117] "RemoveContainer" containerID="3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18" Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.291935 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18\": container with ID starting with 3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18 not found: ID does not exist" containerID="3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.291974 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18"} err="failed to get container status \"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18\": rpc error: code = NotFound desc = could not find container \"3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18\": container with ID starting with 3b5f138b7464ff4dd3a2ca3a1b7e286efb79376c7fa96d89d9903b00f8ffee18 not found: ID does not exist" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.309791 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.309831 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9fca2aa-81cc-4784-8104-fe7e118c3c17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.309844 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4qmq\" (UniqueName: \"kubernetes.io/projected/e9fca2aa-81cc-4784-8104-fe7e118c3c17-kube-api-access-n4qmq\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.351721 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362011 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.362502 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="dnsmasq-dns" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362526 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="dnsmasq-dns" Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.362569 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642eadf4-e07e-4a88-9f26-195713f66f79" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362578 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="642eadf4-e07e-4a88-9f26-195713f66f79" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.362601 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" containerName="nova-scheduler-scheduler" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362608 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" containerName="nova-scheduler-scheduler" Jan 21 16:06:58 crc kubenswrapper[4834]: E0121 16:06:58.362635 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="init" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362642 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="init" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362814 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d868260-9d37-459e-b333-78d9d1f7f1dd" containerName="dnsmasq-dns" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362833 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" containerName="nova-scheduler-scheduler" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.362846 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="642eadf4-e07e-4a88-9f26-195713f66f79" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.363545 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.366841 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.373085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.413461 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv42b\" (UniqueName: \"kubernetes.io/projected/84894834-5968-45e4-af2d-e13a2539d13e-kube-api-access-qv42b\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.413736 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.414269 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.516967 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.517786 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.518040 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv42b\" (UniqueName: \"kubernetes.io/projected/84894834-5968-45e4-af2d-e13a2539d13e-kube-api-access-qv42b\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.521479 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.521520 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84894834-5968-45e4-af2d-e13a2539d13e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.539008 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv42b\" (UniqueName: \"kubernetes.io/projected/84894834-5968-45e4-af2d-e13a2539d13e-kube-api-access-qv42b\") pod \"nova-cell1-novncproxy-0\" (UID: \"84894834-5968-45e4-af2d-e13a2539d13e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.548224 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.566698 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.578044 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.580423 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.583975 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.591318 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.619894 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5bbf\" (UniqueName: \"kubernetes.io/projected/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-kube-api-access-k5bbf\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.620069 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.620178 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-config-data\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.693815 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.722578 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5bbf\" (UniqueName: \"kubernetes.io/projected/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-kube-api-access-k5bbf\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.722661 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.722708 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-config-data\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.726773 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 crc kubenswrapper[4834]: I0121 16:06:58.732292 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-config-data\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:58 
crc kubenswrapper[4834]: I0121 16:06:58.739593 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5bbf\" (UniqueName: \"kubernetes.io/projected/77e8adf4-e276-4c90-b0c6-59f8806a0fc9-kube-api-access-k5bbf\") pod \"nova-scheduler-0\" (UID: \"77e8adf4-e276-4c90-b0c6-59f8806a0fc9\") " pod="openstack/nova-scheduler-0" Jan 21 16:06:59 crc kubenswrapper[4834]: I0121 16:06:58.999710 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:06:59 crc kubenswrapper[4834]: I0121 16:06:59.356215 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:06:59 crc kubenswrapper[4834]: I0121 16:06:59.453737 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:06:59 crc kubenswrapper[4834]: W0121 16:06:59.461128 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77e8adf4_e276_4c90_b0c6_59f8806a0fc9.slice/crio-6dc0fb3c10152105687aa6d6aa01d31ea9c6197c0834be0e90845d6b57d6fb49 WatchSource:0}: Error finding container 6dc0fb3c10152105687aa6d6aa01d31ea9c6197c0834be0e90845d6b57d6fb49: Status 404 returned error can't find the container with id 6dc0fb3c10152105687aa6d6aa01d31ea9c6197c0834be0e90845d6b57d6fb49 Jan 21 16:06:59 crc kubenswrapper[4834]: I0121 16:06:59.928712 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": read tcp 10.217.0.2:55746->10.217.1.69:8774: read: connection reset by peer" Jan 21 16:06:59 crc kubenswrapper[4834]: I0121 16:06:59.929327 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": read tcp 10.217.0.2:55752->10.217.1.69:8774: read: connection reset by peer" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.006611 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.014457 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://347b321015cdc5d6d9d23e7abae86039e9e9c0fb8be019ca447bbb4da6c01729" gracePeriod=30 Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.242212 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84894834-5968-45e4-af2d-e13a2539d13e","Type":"ContainerStarted","Data":"aa2a67a7eb8da4ec433d7b0090757e3f12cc7512290d165349efb6c4662e148e"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.242621 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84894834-5968-45e4-af2d-e13a2539d13e","Type":"ContainerStarted","Data":"232f359b5102b88e0a98e665deeacfa159b7a7c2010a062bbcf67b8cf4b991d9"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.244463 4834 generic.go:334] "Generic (PLEG): container finished" podID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerID="a8dae434e29611ebb5a179b8059457be0063e1e5074803392d962ace277d0b2c" exitCode=0 Jan 21 16:07:00 crc kubenswrapper[4834]: 
I0121 16:07:00.244545 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerDied","Data":"a8dae434e29611ebb5a179b8059457be0063e1e5074803392d962ace277d0b2c"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.246885 4834 generic.go:334] "Generic (PLEG): container finished" podID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerID="f0728f9ad2148e12e5d07e7199bd24bdd2a095edc118e405966c253e6cdbcc6a" exitCode=0 Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.247028 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerDied","Data":"f0728f9ad2148e12e5d07e7199bd24bdd2a095edc118e405966c253e6cdbcc6a"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.255657 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"77e8adf4-e276-4c90-b0c6-59f8806a0fc9","Type":"ContainerStarted","Data":"e60a767437e88e53da7a1dd48f28bb79804ac55e7b1e164e3f4943783b91069a"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.255711 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"77e8adf4-e276-4c90-b0c6-59f8806a0fc9","Type":"ContainerStarted","Data":"6dc0fb3c10152105687aa6d6aa01d31ea9c6197c0834be0e90845d6b57d6fb49"} Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.263643 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.26362217 podStartE2EDuration="2.26362217s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:00.259588714 +0000 UTC m=+5766.233937769" watchObservedRunningTime="2026-01-21 16:07:00.26362217 +0000 UTC m=+5766.237971215" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.301321 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.301297086 podStartE2EDuration="2.301297086s" podCreationTimestamp="2026-01-21 16:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:00.289196938 +0000 UTC m=+5766.263545993" watchObservedRunningTime="2026-01-21 16:07:00.301297086 +0000 UTC m=+5766.275646131" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.567353 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="642eadf4-e07e-4a88-9f26-195713f66f79" path="/var/lib/kubelet/pods/642eadf4-e07e-4a88-9f26-195713f66f79/volumes" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.573340 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9fca2aa-81cc-4784-8104-fe7e118c3c17" path="/var/lib/kubelet/pods/e9fca2aa-81cc-4784-8104-fe7e118c3c17/volumes" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.724247 4834 util.go:48] "No ready sandbox for pod can be found. 
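The "Probe failed ... connection reset by peer" entries at 16:06:59 are the kubelet still running nova-api-0's readiness probes, plain HTTP GETs against port 8774, while its containers are being torn down, so these failures are expected noise during termination. In the pod spec the probe plausibly looks like the following; the path and port come from the failure output, every other field is an assumption (and in pre-1.23 API versions the embedded field is named Handler rather than ProbeHandler):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        readiness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",                  // from the probe's GET http://10.217.1.69:8774/
                    Port: intstr.FromInt(8774), // nova-api port in the failure output
                },
            },
        }
        fmt.Printf("%+v\n", readiness.HTTPGet)
    }
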
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.878177 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle\") pod \"c225f7dc-2224-4c32-a888-b9e56d52136f\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.878336 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf5bw\" (UniqueName: \"kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw\") pod \"c225f7dc-2224-4c32-a888-b9e56d52136f\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.878437 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data\") pod \"c225f7dc-2224-4c32-a888-b9e56d52136f\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.878463 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs\") pod \"c225f7dc-2224-4c32-a888-b9e56d52136f\" (UID: \"c225f7dc-2224-4c32-a888-b9e56d52136f\") " Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.886351 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs" (OuterVolumeSpecName: "logs") pod "c225f7dc-2224-4c32-a888-b9e56d52136f" (UID: "c225f7dc-2224-4c32-a888-b9e56d52136f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.910352 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw" (OuterVolumeSpecName: "kube-api-access-cf5bw") pod "c225f7dc-2224-4c32-a888-b9e56d52136f" (UID: "c225f7dc-2224-4c32-a888-b9e56d52136f"). InnerVolumeSpecName "kube-api-access-cf5bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.937955 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data" (OuterVolumeSpecName: "config-data") pod "c225f7dc-2224-4c32-a888-b9e56d52136f" (UID: "c225f7dc-2224-4c32-a888-b9e56d52136f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.962561 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c225f7dc-2224-4c32-a888-b9e56d52136f" (UID: "c225f7dc-2224-4c32-a888-b9e56d52136f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.981617 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.983982 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c225f7dc-2224-4c32-a888-b9e56d52136f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.983993 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c225f7dc-2224-4c32-a888-b9e56d52136f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:00 crc kubenswrapper[4834]: I0121 16:07:00.984006 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf5bw\" (UniqueName: \"kubernetes.io/projected/c225f7dc-2224-4c32-a888-b9e56d52136f-kube-api-access-cf5bw\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.054610 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.197909 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvxrj\" (UniqueName: \"kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj\") pod \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.199191 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle\") pod \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.199737 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data\") pod \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.202886 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj" (OuterVolumeSpecName: "kube-api-access-dvxrj") pod "abf7ce0f-5d02-4da5-b659-05af1350a7f1" (UID: "abf7ce0f-5d02-4da5-b659-05af1350a7f1"). InnerVolumeSpecName "kube-api-access-dvxrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.203052 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs\") pod \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\" (UID: \"abf7ce0f-5d02-4da5-b659-05af1350a7f1\") " Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.203874 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvxrj\" (UniqueName: \"kubernetes.io/projected/abf7ce0f-5d02-4da5-b659-05af1350a7f1-kube-api-access-dvxrj\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.204290 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs" (OuterVolumeSpecName: "logs") pod "abf7ce0f-5d02-4da5-b659-05af1350a7f1" (UID: "abf7ce0f-5d02-4da5-b659-05af1350a7f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.217157 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.219316 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.231485 4834 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.231592 4834 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerName="nova-cell1-conductor-conductor" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.240384 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abf7ce0f-5d02-4da5-b659-05af1350a7f1" (UID: "abf7ce0f-5d02-4da5-b659-05af1350a7f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.269057 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data" (OuterVolumeSpecName: "config-data") pod "abf7ce0f-5d02-4da5-b659-05af1350a7f1" (UID: "abf7ce0f-5d02-4da5-b659-05af1350a7f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.272602 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c225f7dc-2224-4c32-a888-b9e56d52136f","Type":"ContainerDied","Data":"e2c4ac198fb9595cb69fd67b8a389a79623fcff486b08625a62177a7ccdafcaf"} Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.272662 4834 scope.go:117] "RemoveContainer" containerID="f0728f9ad2148e12e5d07e7199bd24bdd2a095edc118e405966c253e6cdbcc6a" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.272825 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.285622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abf7ce0f-5d02-4da5-b659-05af1350a7f1","Type":"ContainerDied","Data":"26cf783b1d35e450cb3438146a22598ae1c984df620d59b3e6e05d409cbbde50"} Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.285716 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.310122 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.310155 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf7ce0f-5d02-4da5-b659-05af1350a7f1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.310169 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf7ce0f-5d02-4da5-b659-05af1350a7f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.311152 4834 scope.go:117] "RemoveContainer" containerID="0d8171c55ff9fac2457f149ddb426c7b4ad52b1776a81221c48e8e27702ff13d" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.348066 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.380998 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.386397 4834 scope.go:117] "RemoveContainer" containerID="a8dae434e29611ebb5a179b8059457be0063e1e5074803392d962ace277d0b2c" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.396910 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409114 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.409612 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409626 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.409642 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 
16:07:01.409648 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.409676 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409682 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" Jan 21 16:07:01 crc kubenswrapper[4834]: E0121 16:07:01.409706 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409711 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409911 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409919 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-log" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409956 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.409967 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" containerName="nova-api-api" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.413900 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.418816 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.420003 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.429153 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.447028 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.449393 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.453634 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.457332 4834 scope.go:117] "RemoveContainer" containerID="ef4fe6c7727b30f3be31cb7eff75f90a294ec8656a5a885e23cd7ac8569894f5" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.460143 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.514512 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4gx\" (UniqueName: \"kubernetes.io/projected/e614d11c-ce9d-42e2-8805-d3a0da859e7f-kube-api-access-2d4gx\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.514674 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614d11c-ce9d-42e2-8805-d3a0da859e7f-logs\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.514724 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-config-data\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.514754 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.515045 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thpp5\" (UniqueName: \"kubernetes.io/projected/4317aa50-b40b-4725-ac51-62d674c1a05c-kube-api-access-thpp5\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.515112 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4317aa50-b40b-4725-ac51-62d674c1a05c-logs\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.515214 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-config-data\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.515402 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 
crc kubenswrapper[4834]: I0121 16:07:01.616624 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-config-data\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616711 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616771 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d4gx\" (UniqueName: \"kubernetes.io/projected/e614d11c-ce9d-42e2-8805-d3a0da859e7f-kube-api-access-2d4gx\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616831 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614d11c-ce9d-42e2-8805-d3a0da859e7f-logs\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616875 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-config-data\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.616986 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thpp5\" (UniqueName: \"kubernetes.io/projected/4317aa50-b40b-4725-ac51-62d674c1a05c-kube-api-access-thpp5\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.617016 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4317aa50-b40b-4725-ac51-62d674c1a05c-logs\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.617527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4317aa50-b40b-4725-ac51-62d674c1a05c-logs\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.618213 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614d11c-ce9d-42e2-8805-d3a0da859e7f-logs\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.623351 
4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.624096 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614d11c-ce9d-42e2-8805-d3a0da859e7f-config-data\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.632381 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-config-data\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.634584 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4317aa50-b40b-4725-ac51-62d674c1a05c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.638262 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thpp5\" (UniqueName: \"kubernetes.io/projected/4317aa50-b40b-4725-ac51-62d674c1a05c-kube-api-access-thpp5\") pod \"nova-api-0\" (UID: \"4317aa50-b40b-4725-ac51-62d674c1a05c\") " pod="openstack/nova-api-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.641206 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4gx\" (UniqueName: \"kubernetes.io/projected/e614d11c-ce9d-42e2-8805-d3a0da859e7f-kube-api-access-2d4gx\") pod \"nova-metadata-0\" (UID: \"e614d11c-ce9d-42e2-8805-d3a0da859e7f\") " pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.754781 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:07:01 crc kubenswrapper[4834]: I0121 16:07:01.777725 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.224492 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.260992 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle\") pod \"72f890ac-74e2-4a65-abb3-1383e236e6a9\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.261239 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm26x\" (UniqueName: \"kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x\") pod \"72f890ac-74e2-4a65-abb3-1383e236e6a9\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.261367 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data\") pod \"72f890ac-74e2-4a65-abb3-1383e236e6a9\" (UID: \"72f890ac-74e2-4a65-abb3-1383e236e6a9\") " Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.282223 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x" (OuterVolumeSpecName: "kube-api-access-xm26x") pod "72f890ac-74e2-4a65-abb3-1383e236e6a9" (UID: "72f890ac-74e2-4a65-abb3-1383e236e6a9"). InnerVolumeSpecName "kube-api-access-xm26x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.300527 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data" (OuterVolumeSpecName: "config-data") pod "72f890ac-74e2-4a65-abb3-1383e236e6a9" (UID: "72f890ac-74e2-4a65-abb3-1383e236e6a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.304374 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72f890ac-74e2-4a65-abb3-1383e236e6a9" (UID: "72f890ac-74e2-4a65-abb3-1383e236e6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.311652 4834 generic.go:334] "Generic (PLEG): container finished" podID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b" exitCode=0 Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.311773 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"72f890ac-74e2-4a65-abb3-1383e236e6a9","Type":"ContainerDied","Data":"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"} Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.311809 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"72f890ac-74e2-4a65-abb3-1383e236e6a9","Type":"ContainerDied","Data":"02b65e391a75253c1b6b79b7f54952efee5ee713165e05dc7e2b2af0221a769b"} Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.311774 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.311831 4834 scope.go:117] "RemoveContainer" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.324713 4834 generic.go:334] "Generic (PLEG): container finished" podID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" containerID="347b321015cdc5d6d9d23e7abae86039e9e9c0fb8be019ca447bbb4da6c01729" exitCode=0
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.337726 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" path="/var/lib/kubelet/pods/abf7ce0f-5d02-4da5-b659-05af1350a7f1/volumes"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.338780 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c225f7dc-2224-4c32-a888-b9e56d52136f" path="/var/lib/kubelet/pods/c225f7dc-2224-4c32-a888-b9e56d52136f/volumes"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.340007 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f448b4f-7224-48bd-8311-ba7ab9b018d7","Type":"ContainerDied","Data":"347b321015cdc5d6d9d23e7abae86039e9e9c0fb8be019ca447bbb4da6c01729"}
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.355056 4834 scope.go:117] "RemoveContainer" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"
Jan 21 16:07:02 crc kubenswrapper[4834]: E0121 16:07:02.357737 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b\": container with ID starting with 1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b not found: ID does not exist" containerID="1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.357784 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b"} err="failed to get container status \"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b\": rpc error: code = NotFound desc = could not find container \"1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b\": container with ID starting with 1eb7ca48edaec85e232027133eb80acacb7f443bb2ae11504e47e79c8c1c155b not found: ID does not exist"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.368269 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.368302 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm26x\" (UniqueName: \"kubernetes.io/projected/72f890ac-74e2-4a65-abb3-1383e236e6a9-kube-api-access-xm26x\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.368311 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72f890ac-74e2-4a65-abb3-1383e236e6a9-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.371848 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.382655 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.393835 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: E0121 16:07:02.394229 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerName="nova-cell1-conductor-conductor"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.394244 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerName="nova-cell1-conductor-conductor"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.394443 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" containerName="nova-cell1-conductor-conductor"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.395108 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.398238 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.401756 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.435617 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.471513 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.471578 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.471761 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwckw\" (UniqueName: \"kubernetes.io/projected/f99d8448-b419-4217-9191-492bb9d4bd74-kube-api-access-xwckw\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.573340 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.573761 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.573817 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwckw\" (UniqueName: \"kubernetes.io/projected/f99d8448-b419-4217-9191-492bb9d4bd74-kube-api-access-xwckw\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.577682 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.578757 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.582071 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f99d8448-b419-4217-9191-492bb9d4bd74-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.599002 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwckw\" (UniqueName: \"kubernetes.io/projected/f99d8448-b419-4217-9191-492bb9d4bd74-kube-api-access-xwckw\") pod \"nova-cell1-conductor-0\" (UID: \"f99d8448-b419-4217-9191-492bb9d4bd74\") " pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.642772 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.724250 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.776765 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data\") pod \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") "
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.777032 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z96k\" (UniqueName: \"kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k\") pod \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") "
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.777278 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle\") pod \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\" (UID: \"9f448b4f-7224-48bd-8311-ba7ab9b018d7\") "
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.786873 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k" (OuterVolumeSpecName: "kube-api-access-4z96k") pod "9f448b4f-7224-48bd-8311-ba7ab9b018d7" (UID: "9f448b4f-7224-48bd-8311-ba7ab9b018d7"). InnerVolumeSpecName "kube-api-access-4z96k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.805633 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f448b4f-7224-48bd-8311-ba7ab9b018d7" (UID: "9f448b4f-7224-48bd-8311-ba7ab9b018d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.813434 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data" (OuterVolumeSpecName: "config-data") pod "9f448b4f-7224-48bd-8311-ba7ab9b018d7" (UID: "9f448b4f-7224-48bd-8311-ba7ab9b018d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.885965 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.885999 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f448b4f-7224-48bd-8311-ba7ab9b018d7-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:02 crc kubenswrapper[4834]: I0121 16:07:02.886013 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z96k\" (UniqueName: \"kubernetes.io/projected/9f448b4f-7224-48bd-8311-ba7ab9b018d7-kube-api-access-4z96k\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.195722 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.352691 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4317aa50-b40b-4725-ac51-62d674c1a05c","Type":"ContainerStarted","Data":"abed862d5ee499467a2e4ca7ea197b9c423ade69cfb51e73ddc30921456fe136"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.353105 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4317aa50-b40b-4725-ac51-62d674c1a05c","Type":"ContainerStarted","Data":"826447e1ff9a4fa124c74750e235c637bcea2a3fd712d80e6ff3e753d6eebc71"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.359006 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9f448b4f-7224-48bd-8311-ba7ab9b018d7","Type":"ContainerDied","Data":"88fac1af5322517e74e40ead2752f009cc3a7b66a7f3cdc043a5526d15814ce9"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.359067 4834 scope.go:117] "RemoveContainer" containerID="347b321015cdc5d6d9d23e7abae86039e9e9c0fb8be019ca447bbb4da6c01729"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.359255 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.363761 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e614d11c-ce9d-42e2-8805-d3a0da859e7f","Type":"ContainerStarted","Data":"9926f8a8440e73a691b03de0835b518b52b4f899ee4838e4d25bea2b41916570"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.363799 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e614d11c-ce9d-42e2-8805-d3a0da859e7f","Type":"ContainerStarted","Data":"5845c60b9bf46f171019920fed184d6f81c90982247946fdb6d84e39c553faac"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.363811 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e614d11c-ce9d-42e2-8805-d3a0da859e7f","Type":"ContainerStarted","Data":"a4ac1418aeff65d56ff5df575fc21c0fd7c0fd534b75e291b0dddd1a90e95a45"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.368153 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f99d8448-b419-4217-9191-492bb9d4bd74","Type":"ContainerStarted","Data":"175010e2a37b1d1968baffb000025b852a3d9ec31866a762caaf8d07d1fe9825"}
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.398155 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.411775 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.428066 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:07:03 crc kubenswrapper[4834]: E0121 16:07:03.428502 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" containerName="nova-cell0-conductor-conductor"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.428522 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" containerName="nova-cell0-conductor-conductor"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.428685 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" containerName="nova-cell0-conductor-conductor"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.429366 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.431774 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.434655 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.522175 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.522619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.522725 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2h4m\" (UniqueName: \"kubernetes.io/projected/62b033f2-d854-4c8b-870b-bb9ad473ec3b-kube-api-access-q2h4m\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.624642 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.624703 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2h4m\" (UniqueName: \"kubernetes.io/projected/62b033f2-d854-4c8b-870b-bb9ad473ec3b-kube-api-access-q2h4m\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.624789 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.630497 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.632558 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62b033f2-d854-4c8b-870b-bb9ad473ec3b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.647872 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2h4m\" (UniqueName: \"kubernetes.io/projected/62b033f2-d854-4c8b-870b-bb9ad473ec3b-kube-api-access-q2h4m\") pod \"nova-cell0-conductor-0\" (UID: \"62b033f2-d854-4c8b-870b-bb9ad473ec3b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.694580 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:07:03 crc kubenswrapper[4834]: I0121 16:07:03.756329 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.002448 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.213385 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.345702 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f890ac-74e2-4a65-abb3-1383e236e6a9" path="/var/lib/kubelet/pods/72f890ac-74e2-4a65-abb3-1383e236e6a9/volumes"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.346704 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f448b4f-7224-48bd-8311-ba7ab9b018d7" path="/var/lib/kubelet/pods/9f448b4f-7224-48bd-8311-ba7ab9b018d7/volumes"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.389110 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"62b033f2-d854-4c8b-870b-bb9ad473ec3b","Type":"ContainerStarted","Data":"155ea322821c7f0bb963de55612033cc432f78c4b3237f7256d1d184526bcd7f"}
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.390795 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f99d8448-b419-4217-9191-492bb9d4bd74","Type":"ContainerStarted","Data":"863ed63dffa3964b69b12bd44863a1da8ea479d15f8bca162ac1cc38744b83b0"}
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.391048 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.394919 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4317aa50-b40b-4725-ac51-62d674c1a05c","Type":"ContainerStarted","Data":"4daad1fba85a753962c69551cb1363a29c11a96a2ab1a667993aef893893c28d"}
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.445492 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.44547223 podStartE2EDuration="2.44547223s" podCreationTimestamp="2026-01-21 16:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:04.438126801 +0000 UTC m=+5770.412475836" watchObservedRunningTime="2026-01-21 16:07:04.44547223 +0000 UTC m=+5770.419821275"
Jan 21 16:07:04 crc kubenswrapper[4834]: I0121 16:07:04.471132 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.471101979 podStartE2EDuration="3.471101979s" podCreationTimestamp="2026-01-21 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:04.464643179 +0000 UTC m=+5770.438992224" watchObservedRunningTime="2026-01-21 16:07:04.471101979 +0000 UTC m=+5770.445451044"
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.416047 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"62b033f2-d854-4c8b-870b-bb9ad473ec3b","Type":"ContainerStarted","Data":"2a1cdf254ff36b29653e71d5faabbf9c469cd70382a252e0511eaf31dd909a8e"}
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.416455 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.447041 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.447020195 podStartE2EDuration="4.447020195s" podCreationTimestamp="2026-01-21 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:04.491391253 +0000 UTC m=+5770.465740298" watchObservedRunningTime="2026-01-21 16:07:05.447020195 +0000 UTC m=+5771.421369240"
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.450040 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.450025408 podStartE2EDuration="2.450025408s" podCreationTimestamp="2026-01-21 16:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:05.439698505 +0000 UTC m=+5771.414047550" watchObservedRunningTime="2026-01-21 16:07:05.450025408 +0000 UTC m=+5771.424374453"
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.809471 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:05 crc kubenswrapper[4834]: I0121 16:07:05.809530 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="abf7ce0f-5d02-4da5-b659-05af1350a7f1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:06 crc kubenswrapper[4834]: I0121 16:07:06.755129 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 16:07:06 crc kubenswrapper[4834]: I0121 16:07:06.756539 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 21 16:07:08 crc kubenswrapper[4834]: I0121 16:07:08.694416 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:07:08 crc kubenswrapper[4834]: I0121 16:07:08.712614 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:07:09 crc kubenswrapper[4834]: I0121 16:07:09.002090 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 21 16:07:09 crc kubenswrapper[4834]: I0121 16:07:09.028784 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 21 16:07:09 crc kubenswrapper[4834]: I0121 16:07:09.459485 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:07:09 crc kubenswrapper[4834]: I0121 16:07:09.479287 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 21 16:07:10 crc kubenswrapper[4834]: I0121 16:07:10.326693 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"
Jan 21 16:07:10 crc kubenswrapper[4834]: E0121 16:07:10.327389 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:07:11 crc kubenswrapper[4834]: I0121 16:07:11.755548 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 16:07:11 crc kubenswrapper[4834]: I0121 16:07:11.755616 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 21 16:07:11 crc kubenswrapper[4834]: I0121 16:07:11.780708 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:07:11 crc kubenswrapper[4834]: I0121 16:07:11.780753 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:07:12 crc kubenswrapper[4834]: I0121 16:07:12.765353 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 21 16:07:12 crc kubenswrapper[4834]: I0121 16:07:12.919275 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4317aa50-b40b-4725-ac51-62d674c1a05c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:12 crc kubenswrapper[4834]: I0121 16:07:12.919275 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e614d11c-ce9d-42e2-8805-d3a0da859e7f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:12 crc kubenswrapper[4834]: I0121 16:07:12.919289 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e614d11c-ce9d-42e2-8805-d3a0da859e7f" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.79:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:12 crc kubenswrapper[4834]: I0121 16:07:12.919332 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4317aa50-b40b-4725-ac51-62d674c1a05c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.80:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:07:13 crc kubenswrapper[4834]: I0121 16:07:13.783871 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.925816 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.927311 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.931351 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.942581 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.974755 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.974805 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg8x5\" (UniqueName: \"kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.974837 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.975043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.975228 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:15 crc kubenswrapper[4834]: I0121 16:07:15.975641 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078067 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078454 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg8x5\" (UniqueName: \"kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078488 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078565 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078637 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078758 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.078904 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.086491 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.092151 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.092269 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.093851 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.098234 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg8x5\" (UniqueName: \"kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5\") pod \"cinder-scheduler-0\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.254460 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:16 crc kubenswrapper[4834]: I0121 16:07:16.731109 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:16 crc kubenswrapper[4834]: W0121 16:07:16.733692 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03977b05_a36b_41b7_babf_113972dbb34f.slice/crio-12ff006d7cbba2619db8d17cf201d1561eedb9844306daa0974883ec354902fc WatchSource:0}: Error finding container 12ff006d7cbba2619db8d17cf201d1561eedb9844306daa0974883ec354902fc: Status 404 returned error can't find the container with id 12ff006d7cbba2619db8d17cf201d1561eedb9844306daa0974883ec354902fc
Jan 21 16:07:17 crc kubenswrapper[4834]: I0121 16:07:17.543191 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerStarted","Data":"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42"}
Jan 21 16:07:17 crc kubenswrapper[4834]: I0121 16:07:17.543828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerStarted","Data":"12ff006d7cbba2619db8d17cf201d1561eedb9844306daa0974883ec354902fc"}
Jan 21 16:07:17 crc kubenswrapper[4834]: I0121 16:07:17.587510 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 16:07:17 crc kubenswrapper[4834]: I0121 16:07:17.587746 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api-log" containerID="cri-o://9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6" gracePeriod=30
Jan 21 16:07:17 crc kubenswrapper[4834]: I0121 16:07:17.587852 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api" containerID="cri-o://f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301" gracePeriod=30
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.262796 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.266003 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.279885 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.311870 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382285 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382341 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382379 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mkhf\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-kube-api-access-5mkhf\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382403 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382429 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382444 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382480 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382511 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382544 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382572 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382591 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382626 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382642 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382668 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382682 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.382715 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484795 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484851 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484882 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mkhf\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-kube-api-access-5mkhf\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484901 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484919 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484953 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.484990 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485011 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485022 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485042 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485105 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485149 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485175 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485245 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485268 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485288 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485327 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485352 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485423 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485682 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.485717 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.486157 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.486492 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.486751 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-run\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.486790 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.486807 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b18a40a5-be9d-43a9-a420-52b71cf421b9-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.491255 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.494120 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.494178 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.494519 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.491585 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b18a40a5-be9d-43a9-a420-52b71cf421b9-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.505663 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mkhf\" (UniqueName: \"kubernetes.io/projected/b18a40a5-be9d-43a9-a420-52b71cf421b9-kube-api-access-5mkhf\") pod \"cinder-volume-volume1-0\" (UID: \"b18a40a5-be9d-43a9-a420-52b71cf421b9\") " pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.552314 4834 generic.go:334] "Generic (PLEG): container finished" podID="0df6aeac-666e-4332-83f1-98702e74bf83" containerID="9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6" exitCode=143
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.552381 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerDied","Data":"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6"}
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.553823 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerStarted","Data":"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b"}
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.602401 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 21 16:07:18 crc kubenswrapper[4834]: I0121 16:07:18.605700 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.605679587 podStartE2EDuration="3.605679587s" podCreationTimestamp="2026-01-21 16:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:18.598671039 +0000 UTC m=+5784.573020084" watchObservedRunningTime="2026-01-21 16:07:18.605679587 +0000 UTC m=+5784.580028632"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.120202 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.126847 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.137000 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.166716 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304098 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-lib-modules\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304199 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-run\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304242 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-ceph\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304274 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-scripts\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304297 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92vh\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-kube-api-access-v92vh\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304331 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304404 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304489 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304554 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304573 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304592 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-sys\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304614 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304654 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-dev\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304706 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.304763 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.406549 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.406962 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0"
Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.406997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-lib-modules\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407041 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-run\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407079 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-ceph\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407111 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-scripts\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407136 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92vh\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-kube-api-access-v92vh\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407164 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407190 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407231 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407299 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407353 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " 
pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407377 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407432 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407460 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-sys\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407495 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-dev\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407601 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-dev\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407653 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407686 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-lib-modules\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407713 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-run\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.407880 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.409113 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc 
kubenswrapper[4834]: I0121 16:07:19.409614 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.409684 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.409768 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.409793 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bba744ba-b8e9-46e1-a4b9-95e30841864d-sys\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.414294 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-ceph\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.414469 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.415869 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.418309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-config-data\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.427639 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92vh\" (UniqueName: \"kubernetes.io/projected/bba744ba-b8e9-46e1-a4b9-95e30841864d-kube-api-access-v92vh\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.441408 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bba744ba-b8e9-46e1-a4b9-95e30841864d-scripts\") pod \"cinder-backup-0\" (UID: \"bba744ba-b8e9-46e1-a4b9-95e30841864d\") " pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.461961 4834 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.473273 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 21 16:07:19 crc kubenswrapper[4834]: W0121 16:07:19.477944 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb18a40a5_be9d_43a9_a420_52b71cf421b9.slice/crio-2162174a102f3b9b1c0c31e9e368b30047403c311aff4a2af7d74e726c443e94 WatchSource:0}: Error finding container 2162174a102f3b9b1c0c31e9e368b30047403c311aff4a2af7d74e726c443e94: Status 404 returned error can't find the container with id 2162174a102f3b9b1c0c31e9e368b30047403c311aff4a2af7d74e726c443e94 Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.482186 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:07:19 crc kubenswrapper[4834]: I0121 16:07:19.564241 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b18a40a5-be9d-43a9-a420-52b71cf421b9","Type":"ContainerStarted","Data":"2162174a102f3b9b1c0c31e9e368b30047403c311aff4a2af7d74e726c443e94"} Jan 21 16:07:20 crc kubenswrapper[4834]: I0121 16:07:20.065962 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 16:07:20 crc kubenswrapper[4834]: W0121 16:07:20.066006 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba744ba_b8e9_46e1_a4b9_95e30841864d.slice/crio-ddb9d78f6a35d3f82a97ea89e7bc6517f5d7852e9c0a6024e3a4f3d35db71190 WatchSource:0}: Error finding container ddb9d78f6a35d3f82a97ea89e7bc6517f5d7852e9c0a6024e3a4f3d35db71190: Status 404 returned error can't find the container with id ddb9d78f6a35d3f82a97ea89e7bc6517f5d7852e9c0a6024e3a4f3d35db71190 Jan 21 16:07:20 crc kubenswrapper[4834]: I0121 16:07:20.584235 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bba744ba-b8e9-46e1-a4b9-95e30841864d","Type":"ContainerStarted","Data":"ddb9d78f6a35d3f82a97ea89e7bc6517f5d7852e9c0a6024e3a4f3d35db71190"} Jan 21 16:07:20 crc kubenswrapper[4834]: I0121 16:07:20.588331 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b18a40a5-be9d-43a9-a420-52b71cf421b9","Type":"ContainerStarted","Data":"b0a74d7baf348eca886f3e97efe367b65ef45686ca83fb973a0c58373cb52348"} Jan 21 16:07:20 crc kubenswrapper[4834]: I0121 16:07:20.747547 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.76:8776/healthcheck\": read tcp 10.217.0.2:48920->10.217.1.76:8776: read: connection reset by peer" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.191908 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.255550 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.366855 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.366909 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.367004 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.367054 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.368042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxwt\" (UniqueName: \"kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.368106 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.368153 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.368184 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data\") pod \"0df6aeac-666e-4332-83f1-98702e74bf83\" (UID: \"0df6aeac-666e-4332-83f1-98702e74bf83\") " Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.368842 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0df6aeac-666e-4332-83f1-98702e74bf83-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.369574 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs" (OuterVolumeSpecName: "logs") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.371826 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt" (OuterVolumeSpecName: "kube-api-access-flxwt") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "kube-api-access-flxwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.371916 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.372065 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts" (OuterVolumeSpecName: "scripts") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.416507 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.426100 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data" (OuterVolumeSpecName: "config-data") pod "0df6aeac-666e-4332-83f1-98702e74bf83" (UID: "0df6aeac-666e-4332-83f1-98702e74bf83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.470888 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0df6aeac-666e-4332-83f1-98702e74bf83-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.470958 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.470974 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.470987 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.470997 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxwt\" (UniqueName: \"kubernetes.io/projected/0df6aeac-666e-4332-83f1-98702e74bf83-kube-api-access-flxwt\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.471009 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0df6aeac-666e-4332-83f1-98702e74bf83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.608956 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bba744ba-b8e9-46e1-a4b9-95e30841864d","Type":"ContainerStarted","Data":"8a2f7589491780c40adbe131ef5216ddd8db76e0c980b94bb8ba4b3439fbd95d"} Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.609010 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bba744ba-b8e9-46e1-a4b9-95e30841864d","Type":"ContainerStarted","Data":"af611cbf2544160619d7db1a365ef64b1d7b6ebcfa9a45f2e54294a4f169d9cf"} Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.611309 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.611304 4834 generic.go:334] "Generic (PLEG): container finished" podID="0df6aeac-666e-4332-83f1-98702e74bf83" containerID="f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301" exitCode=0 Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.611326 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerDied","Data":"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301"} Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.611483 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0df6aeac-666e-4332-83f1-98702e74bf83","Type":"ContainerDied","Data":"f18a2384102344622a154ed66757ce1c54b88111de726643d15724596c9e9308"} Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.611509 4834 scope.go:117] "RemoveContainer" containerID="f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.614411 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b18a40a5-be9d-43a9-a420-52b71cf421b9","Type":"ContainerStarted","Data":"4e3aba82c6dc4fa30ecfd29a410ebbd3cb2afc0cdcd262d6d430953b55ad08a9"} Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.642462 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=1.8324321270000001 podStartE2EDuration="2.642439994s" podCreationTimestamp="2026-01-21 16:07:19 +0000 UTC" firstStartedPulling="2026-01-21 16:07:20.069647643 +0000 UTC m=+5786.043996688" lastFinishedPulling="2026-01-21 16:07:20.8796555 +0000 UTC m=+5786.854004555" observedRunningTime="2026-01-21 16:07:21.641556096 +0000 UTC m=+5787.615905161" watchObservedRunningTime="2026-01-21 16:07:21.642439994 +0000 UTC m=+5787.616789039" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.658077 4834 scope.go:117] "RemoveContainer" containerID="9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.664322 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.819448 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.821290 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.821627 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.827989 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.828968 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.844177 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:21 crc kubenswrapper[4834]: E0121 16:07:21.844652 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.844665 
4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api" Jan 21 16:07:21 crc kubenswrapper[4834]: E0121 16:07:21.844685 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api-log" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.844691 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api-log" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.844892 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api-log" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.844905 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" containerName="cinder-api" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.845910 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.846166 4834 scope.go:117] "RemoveContainer" containerID="f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.846377 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.846436 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.850284 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:07:21 crc kubenswrapper[4834]: E0121 16:07:21.850401 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301\": container with ID starting with f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301 not found: ID does not exist" containerID="f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.850430 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301"} err="failed to get container status \"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301\": rpc error: code = NotFound desc = could not find container \"f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301\": container with ID starting with f675003da87991852b0a134ccc2a02f6cab5c71857779da39c7c8c080ed09301 not found: ID does not exist" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.850456 4834 scope.go:117] "RemoveContainer" containerID="9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.850469 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:07:21 crc kubenswrapper[4834]: E0121 16:07:21.850869 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6\": container with ID starting with 9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6 not found: ID does not 
exist" containerID="9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.850900 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6"} err="failed to get container status \"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6\": rpc error: code = NotFound desc = could not find container \"9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6\": container with ID starting with 9e2b4aaca3be4c39eb32c1553fc7bb6ec7c0e6f87cb3cfc24c11edd729aa10c6 not found: ID does not exist" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.857991 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.864599 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.869079 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.055861928 podStartE2EDuration="3.869053865s" podCreationTimestamp="2026-01-21 16:07:18 +0000 UTC" firstStartedPulling="2026-01-21 16:07:19.481937742 +0000 UTC m=+5785.456286787" lastFinishedPulling="2026-01-21 16:07:20.295129679 +0000 UTC m=+5786.269478724" observedRunningTime="2026-01-21 16:07:21.832914728 +0000 UTC m=+5787.807263773" watchObservedRunningTime="2026-01-21 16:07:21.869053865 +0000 UTC m=+5787.843402910" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.935498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xdb\" (UniqueName: \"kubernetes.io/projected/09e4f954-89a6-4faf-9021-0e848b28c7b4-kube-api-access-q8xdb\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.937488 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.941529 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.941769 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.941838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09e4f954-89a6-4faf-9021-0e848b28c7b4-logs\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.941936 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-scripts\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:21 crc kubenswrapper[4834]: I0121 16:07:21.942501 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09e4f954-89a6-4faf-9021-0e848b28c7b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046106 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8xdb\" (UniqueName: \"kubernetes.io/projected/09e4f954-89a6-4faf-9021-0e848b28c7b4-kube-api-access-q8xdb\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046182 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046217 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046255 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09e4f954-89a6-4faf-9021-0e848b28c7b4-logs\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046282 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046305 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-scripts\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046350 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09e4f954-89a6-4faf-9021-0e848b28c7b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.046463 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09e4f954-89a6-4faf-9021-0e848b28c7b4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 
16:07:22.047387 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09e4f954-89a6-4faf-9021-0e848b28c7b4-logs\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.053969 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.054145 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-config-data-custom\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.055297 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-scripts\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.065527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xdb\" (UniqueName: \"kubernetes.io/projected/09e4f954-89a6-4faf-9021-0e848b28c7b4-kube-api-access-q8xdb\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.066010 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4f954-89a6-4faf-9021-0e848b28c7b4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09e4f954-89a6-4faf-9021-0e848b28c7b4\") " pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.192265 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.339194 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df6aeac-666e-4332-83f1-98702e74bf83" path="/var/lib/kubelet/pods/0df6aeac-666e-4332-83f1-98702e74bf83/volumes" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.627127 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.638113 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:07:22 crc kubenswrapper[4834]: I0121 16:07:22.738594 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:23 crc kubenswrapper[4834]: I0121 16:07:23.603019 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 21 16:07:23 crc kubenswrapper[4834]: I0121 16:07:23.647074 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09e4f954-89a6-4faf-9021-0e848b28c7b4","Type":"ContainerStarted","Data":"4543c1189b7c4ed0b816d1e1dcde0d1dfd6a217c68e3e59b5799679fbca2f6ff"} Jan 21 16:07:23 crc kubenswrapper[4834]: I0121 16:07:23.647125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09e4f954-89a6-4faf-9021-0e848b28c7b4","Type":"ContainerStarted","Data":"d504a1f7cc3d2f240d3c82b2a2a233b56e099cadcb88eeda64463f0f99b16804"} Jan 21 16:07:24 crc kubenswrapper[4834]: I0121 16:07:24.336270 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:07:24 crc kubenswrapper[4834]: E0121 16:07:24.336907 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:07:24 crc kubenswrapper[4834]: I0121 16:07:24.464138 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 16:07:24 crc kubenswrapper[4834]: I0121 16:07:24.660240 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09e4f954-89a6-4faf-9021-0e848b28c7b4","Type":"ContainerStarted","Data":"81fc246c0ee76144a99bcb03e4155710b7812847df8f56503a1c8a50aa7dfe07"} Jan 21 16:07:24 crc kubenswrapper[4834]: I0121 16:07:24.692912 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6928886370000003 podStartE2EDuration="3.692888637s" podCreationTimestamp="2026-01-21 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:24.689582193 +0000 UTC m=+5790.663931238" watchObservedRunningTime="2026-01-21 16:07:24.692888637 +0000 UTC m=+5790.667237682" Jan 21 16:07:25 crc kubenswrapper[4834]: I0121 16:07:25.672495 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:07:26 crc kubenswrapper[4834]: I0121 16:07:26.541867 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Jan 21 16:07:26 crc kubenswrapper[4834]: I0121 16:07:26.606302 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:26 crc kubenswrapper[4834]: I0121 16:07:26.679645 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="cinder-scheduler" containerID="cri-o://49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42" gracePeriod=30 Jan 21 16:07:26 crc kubenswrapper[4834]: I0121 16:07:26.679750 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="probe" containerID="cri-o://95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b" gracePeriod=30 Jan 21 16:07:27 crc kubenswrapper[4834]: I0121 16:07:27.689032 4834 generic.go:334] "Generic (PLEG): container finished" podID="03977b05-a36b-41b7-babf-113972dbb34f" containerID="49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42" exitCode=0 Jan 21 16:07:27 crc kubenswrapper[4834]: I0121 16:07:27.689320 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerDied","Data":"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42"} Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.096818 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178054 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178227 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178249 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178285 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178357 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg8x5\" (UniqueName: \"kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178414 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id\") pod \"03977b05-a36b-41b7-babf-113972dbb34f\" (UID: \"03977b05-a36b-41b7-babf-113972dbb34f\") " Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.178829 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.185057 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.185068 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5" (OuterVolumeSpecName: "kube-api-access-jg8x5") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "kube-api-access-jg8x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.190052 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts" (OuterVolumeSpecName: "scripts") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.247995 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.280538 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.280570 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.280580 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.280592 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg8x5\" (UniqueName: \"kubernetes.io/projected/03977b05-a36b-41b7-babf-113972dbb34f-kube-api-access-jg8x5\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.280601 4834 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03977b05-a36b-41b7-babf-113972dbb34f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.296651 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data" (OuterVolumeSpecName: "config-data") pod "03977b05-a36b-41b7-babf-113972dbb34f" (UID: "03977b05-a36b-41b7-babf-113972dbb34f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.382118 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03977b05-a36b-41b7-babf-113972dbb34f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.700443 4834 generic.go:334] "Generic (PLEG): container finished" podID="03977b05-a36b-41b7-babf-113972dbb34f" containerID="95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b" exitCode=0 Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.700511 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerDied","Data":"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b"} Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.700524 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.700558 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"03977b05-a36b-41b7-babf-113972dbb34f","Type":"ContainerDied","Data":"12ff006d7cbba2619db8d17cf201d1561eedb9844306daa0974883ec354902fc"} Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.700583 4834 scope.go:117] "RemoveContainer" containerID="95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.725673 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.733605 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.735352 4834 scope.go:117] "RemoveContainer" containerID="49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.758770 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:28 crc kubenswrapper[4834]: E0121 16:07:28.759278 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="probe" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.759307 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="probe" Jan 21 16:07:28 crc kubenswrapper[4834]: E0121 16:07:28.759321 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="cinder-scheduler" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.759331 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="cinder-scheduler" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.759504 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="cinder-scheduler" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.759526 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="03977b05-a36b-41b7-babf-113972dbb34f" containerName="probe" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.760593 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.763125 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.769241 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.784675 4834 scope.go:117] "RemoveContainer" containerID="95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b" Jan 21 16:07:28 crc kubenswrapper[4834]: E0121 16:07:28.787418 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b\": container with ID starting with 95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b not found: ID does not exist" containerID="95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.787530 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b"} err="failed to get container status \"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b\": rpc error: code = NotFound desc = could not find container \"95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b\": container with ID starting with 95a9b33e06158b22633f996cb855c612b0057906c7c638ebd6592f0d2867e51b not found: ID does not exist" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.787571 4834 scope.go:117] "RemoveContainer" containerID="49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42" Jan 21 16:07:28 crc kubenswrapper[4834]: E0121 16:07:28.788203 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42\": container with ID starting with 49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42 not found: ID does not exist" containerID="49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788277 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42"} err="failed to get container status \"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42\": rpc error: code = NotFound desc = could not find container \"49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42\": container with ID starting with 49505e9b32db952d12c07577af6e6c7c331acdda2457e9b36cc514ab67ce3e42 not found: ID does not exist" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788366 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788487 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7tp\" (UniqueName: \"kubernetes.io/projected/2393786a-fa47-4d59-94a0-ec0e73f54392-kube-api-access-lh7tp\") pod 
\"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788557 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2393786a-fa47-4d59-94a0-ec0e73f54392-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788622 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788652 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-scripts\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.788763 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.855719 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh7tp\" (UniqueName: \"kubernetes.io/projected/2393786a-fa47-4d59-94a0-ec0e73f54392-kube-api-access-lh7tp\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890785 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2393786a-fa47-4d59-94a0-ec0e73f54392-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890814 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890844 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-scripts\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890930 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.890996 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.891799 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2393786a-fa47-4d59-94a0-ec0e73f54392-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.894761 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.895245 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-config-data\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.897172 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-scripts\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.903399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393786a-fa47-4d59-94a0-ec0e73f54392-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:28 crc kubenswrapper[4834]: I0121 16:07:28.912547 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh7tp\" (UniqueName: \"kubernetes.io/projected/2393786a-fa47-4d59-94a0-ec0e73f54392-kube-api-access-lh7tp\") pod \"cinder-scheduler-0\" (UID: \"2393786a-fa47-4d59-94a0-ec0e73f54392\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:29 crc kubenswrapper[4834]: I0121 16:07:29.102796 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:29 crc kubenswrapper[4834]: I0121 16:07:29.581663 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:29 crc kubenswrapper[4834]: I0121 16:07:29.664310 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 16:07:29 crc kubenswrapper[4834]: I0121 16:07:29.712266 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2393786a-fa47-4d59-94a0-ec0e73f54392","Type":"ContainerStarted","Data":"093ac1892fc80fc1859bec6334de1610d0f9fa6c21ca3b47704ee0881990cc4f"} Jan 21 16:07:30 crc kubenswrapper[4834]: I0121 16:07:30.346172 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03977b05-a36b-41b7-babf-113972dbb34f" path="/var/lib/kubelet/pods/03977b05-a36b-41b7-babf-113972dbb34f/volumes" Jan 21 16:07:30 crc kubenswrapper[4834]: I0121 16:07:30.746093 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2393786a-fa47-4d59-94a0-ec0e73f54392","Type":"ContainerStarted","Data":"7361ee069bdeee58431dc593634a0694f394b8f046d4203ffd82c31d3a53af03"} Jan 21 16:07:31 crc kubenswrapper[4834]: I0121 16:07:31.758860 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2393786a-fa47-4d59-94a0-ec0e73f54392","Type":"ContainerStarted","Data":"5af76bbbae45392dc56cd4b6524ef61a4eb672ca5732c2978ed61fa27a18749d"} Jan 21 16:07:31 crc kubenswrapper[4834]: I0121 16:07:31.786449 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.786430079 podStartE2EDuration="3.786430079s" podCreationTimestamp="2026-01-21 16:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:31.780337639 +0000 UTC m=+5797.754686704" watchObservedRunningTime="2026-01-21 16:07:31.786430079 +0000 UTC m=+5797.760779124" Jan 21 16:07:34 crc kubenswrapper[4834]: I0121 16:07:34.103712 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:07:34 crc kubenswrapper[4834]: I0121 16:07:34.246769 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:07:37 crc kubenswrapper[4834]: I0121 16:07:37.324320 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:07:37 crc kubenswrapper[4834]: E0121 16:07:37.325067 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:07:39 crc kubenswrapper[4834]: I0121 16:07:39.340596 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:07:49 crc kubenswrapper[4834]: I0121 16:07:49.324751 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:07:49 crc kubenswrapper[4834]: E0121 16:07:49.325763 4834 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:00 crc kubenswrapper[4834]: I0121 16:08:00.325704 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:08:00 crc kubenswrapper[4834]: E0121 16:08:00.327094 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:08 crc kubenswrapper[4834]: I0121 16:08:08.408616 4834 scope.go:117] "RemoveContainer" containerID="365ca94985ca4696803efaaa0abd2a5c923fedf46aeb1e102ae101f5b41e9ab3" Jan 21 16:08:08 crc kubenswrapper[4834]: I0121 16:08:08.431246 4834 scope.go:117] "RemoveContainer" containerID="9a67b14dc83436e1300ee9354f16fe91ff319521cca9e6bbb873c9a90c6fb4cd" Jan 21 16:08:14 crc kubenswrapper[4834]: I0121 16:08:14.331191 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:08:14 crc kubenswrapper[4834]: E0121 16:08:14.333259 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:25 crc kubenswrapper[4834]: I0121 16:08:25.325457 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:08:25 crc kubenswrapper[4834]: E0121 16:08:25.326673 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.529730 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.537838 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.551708 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.725942 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.728074 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.728125 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hvc6\" (UniqueName: \"kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.830872 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.831012 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.831039 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hvc6\" (UniqueName: \"kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.832147 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.832161 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.855009 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6hvc6\" (UniqueName: \"kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6\") pod \"redhat-operators-lmdx2\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:37 crc kubenswrapper[4834]: I0121 16:08:37.896009 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:38 crc kubenswrapper[4834]: I0121 16:08:38.327186 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:08:38 crc kubenswrapper[4834]: E0121 16:08:38.328051 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:38 crc kubenswrapper[4834]: I0121 16:08:38.371587 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:08:38 crc kubenswrapper[4834]: I0121 16:08:38.399050 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerStarted","Data":"370e2bcf63573e4b8087afc8c02c165dcef65d32cdde0c0707f9ab66266bf438"} Jan 21 16:08:39 crc kubenswrapper[4834]: I0121 16:08:39.411838 4834 generic.go:334] "Generic (PLEG): container finished" podID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerID="7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755" exitCode=0 Jan 21 16:08:39 crc kubenswrapper[4834]: I0121 16:08:39.411969 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerDied","Data":"7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755"} Jan 21 16:08:40 crc kubenswrapper[4834]: I0121 16:08:40.424268 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerStarted","Data":"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e"} Jan 21 16:08:43 crc kubenswrapper[4834]: I0121 16:08:43.454558 4834 generic.go:334] "Generic (PLEG): container finished" podID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerID="ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e" exitCode=0 Jan 21 16:08:43 crc kubenswrapper[4834]: I0121 16:08:43.454594 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerDied","Data":"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e"} Jan 21 16:08:44 crc kubenswrapper[4834]: I0121 16:08:44.467761 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerStarted","Data":"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1"} Jan 21 16:08:44 crc kubenswrapper[4834]: I0121 16:08:44.489523 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lmdx2" podStartSLOduration=2.752092624 podStartE2EDuration="7.489504319s" podCreationTimestamp="2026-01-21 16:08:37 +0000 UTC" firstStartedPulling="2026-01-21 16:08:39.415074547 +0000 UTC m=+5865.389423592" lastFinishedPulling="2026-01-21 16:08:44.152486242 +0000 UTC m=+5870.126835287" observedRunningTime="2026-01-21 16:08:44.485865005 +0000 UTC m=+5870.460214050" watchObservedRunningTime="2026-01-21 16:08:44.489504319 +0000 UTC m=+5870.463853364" Jan 21 16:08:47 crc kubenswrapper[4834]: I0121 16:08:47.896113 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:47 crc kubenswrapper[4834]: I0121 16:08:47.896733 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:48 crc kubenswrapper[4834]: I0121 16:08:48.938709 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmdx2" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="registry-server" probeResult="failure" output=< Jan 21 16:08:48 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 16:08:48 crc kubenswrapper[4834]: > Jan 21 16:08:49 crc kubenswrapper[4834]: I0121 16:08:49.324921 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:08:49 crc kubenswrapper[4834]: E0121 16:08:49.325713 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:08:57 crc kubenswrapper[4834]: I0121 16:08:57.947374 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:58 crc kubenswrapper[4834]: I0121 16:08:58.006265 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:08:58 crc kubenswrapper[4834]: I0121 16:08:58.191201 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:08:59 crc kubenswrapper[4834]: I0121 16:08:59.604969 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmdx2" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="registry-server" containerID="cri-o://aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1" gracePeriod=2 Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.062700 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.172825 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content\") pod \"54ec10d5-fb56-44fe-9964-41474a46cac0\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.172910 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hvc6\" (UniqueName: \"kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6\") pod \"54ec10d5-fb56-44fe-9964-41474a46cac0\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.173021 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities\") pod \"54ec10d5-fb56-44fe-9964-41474a46cac0\" (UID: \"54ec10d5-fb56-44fe-9964-41474a46cac0\") " Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.174839 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities" (OuterVolumeSpecName: "utilities") pod "54ec10d5-fb56-44fe-9964-41474a46cac0" (UID: "54ec10d5-fb56-44fe-9964-41474a46cac0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.185319 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6" (OuterVolumeSpecName: "kube-api-access-6hvc6") pod "54ec10d5-fb56-44fe-9964-41474a46cac0" (UID: "54ec10d5-fb56-44fe-9964-41474a46cac0"). InnerVolumeSpecName "kube-api-access-6hvc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.276136 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hvc6\" (UniqueName: \"kubernetes.io/projected/54ec10d5-fb56-44fe-9964-41474a46cac0-kube-api-access-6hvc6\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.276206 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.294191 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54ec10d5-fb56-44fe-9964-41474a46cac0" (UID: "54ec10d5-fb56-44fe-9964-41474a46cac0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.377971 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ec10d5-fb56-44fe-9964-41474a46cac0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.616639 4834 generic.go:334] "Generic (PLEG): container finished" podID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerID="aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1" exitCode=0 Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.616683 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerDied","Data":"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1"} Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.616710 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmdx2" event={"ID":"54ec10d5-fb56-44fe-9964-41474a46cac0","Type":"ContainerDied","Data":"370e2bcf63573e4b8087afc8c02c165dcef65d32cdde0c0707f9ab66266bf438"} Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.616729 4834 scope.go:117] "RemoveContainer" containerID="aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.616749 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmdx2" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.645879 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.653829 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmdx2"] Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.673245 4834 scope.go:117] "RemoveContainer" containerID="ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.701414 4834 scope.go:117] "RemoveContainer" containerID="7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.735265 4834 scope.go:117] "RemoveContainer" containerID="aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1" Jan 21 16:09:00 crc kubenswrapper[4834]: E0121 16:09:00.735703 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1\": container with ID starting with aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1 not found: ID does not exist" containerID="aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.735761 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1"} err="failed to get container status \"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1\": rpc error: code = NotFound desc = could not find container \"aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1\": container with ID starting with aeecb2b6bd821a4cfd953184e71a2912a100206211fdac36b4061c88398aafc1 not found: ID does not exist" Jan 21 16:09:00 crc 
kubenswrapper[4834]: I0121 16:09:00.735793 4834 scope.go:117] "RemoveContainer" containerID="ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e" Jan 21 16:09:00 crc kubenswrapper[4834]: E0121 16:09:00.736147 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e\": container with ID starting with ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e not found: ID does not exist" containerID="ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.736185 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e"} err="failed to get container status \"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e\": rpc error: code = NotFound desc = could not find container \"ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e\": container with ID starting with ea785cc79a58221171a2bd63457bb1f7ec26e717eb7db70349a6eeca6d8bf17e not found: ID does not exist" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.736211 4834 scope.go:117] "RemoveContainer" containerID="7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755" Jan 21 16:09:00 crc kubenswrapper[4834]: E0121 16:09:00.736514 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755\": container with ID starting with 7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755 not found: ID does not exist" containerID="7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755" Jan 21 16:09:00 crc kubenswrapper[4834]: I0121 16:09:00.736549 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755"} err="failed to get container status \"7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755\": rpc error: code = NotFound desc = could not find container \"7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755\": container with ID starting with 7cfde573c86b41645bc8c4d13b97cc0385d7d78724d28c91ba0828ec3f625755 not found: ID does not exist" Jan 21 16:09:01 crc kubenswrapper[4834]: I0121 16:09:01.324568 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:09:01 crc kubenswrapper[4834]: E0121 16:09:01.325154 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:09:02 crc kubenswrapper[4834]: I0121 16:09:02.338626 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" path="/var/lib/kubelet/pods/54ec10d5-fb56-44fe-9964-41474a46cac0/volumes" Jan 21 16:09:11 crc kubenswrapper[4834]: I0121 16:09:11.044539 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lnfv6"] Jan 21 16:09:11 
crc kubenswrapper[4834]: I0121 16:09:11.055794 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ff41-account-create-update-zr2nl"] Jan 21 16:09:11 crc kubenswrapper[4834]: I0121 16:09:11.066263 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lnfv6"] Jan 21 16:09:11 crc kubenswrapper[4834]: I0121 16:09:11.078605 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ff41-account-create-update-zr2nl"] Jan 21 16:09:12 crc kubenswrapper[4834]: I0121 16:09:12.335861 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1692378f-f497-4913-b9e9-59e40af2100b" path="/var/lib/kubelet/pods/1692378f-f497-4913-b9e9-59e40af2100b/volumes" Jan 21 16:09:12 crc kubenswrapper[4834]: I0121 16:09:12.337153 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e40b21-a32b-4aef-bcb1-5b5187d68abc" path="/var/lib/kubelet/pods/41e40b21-a32b-4aef-bcb1-5b5187d68abc/volumes" Jan 21 16:09:15 crc kubenswrapper[4834]: I0121 16:09:15.324753 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:09:15 crc kubenswrapper[4834]: E0121 16:09:15.325657 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:09:18 crc kubenswrapper[4834]: I0121 16:09:18.031787 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-mw4wp"] Jan 21 16:09:18 crc kubenswrapper[4834]: I0121 16:09:18.040483 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-mw4wp"] Jan 21 16:09:18 crc kubenswrapper[4834]: I0121 16:09:18.337567 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b5efbb-2116-40d4-8c4f-59a93f198024" path="/var/lib/kubelet/pods/98b5efbb-2116-40d4-8c4f-59a93f198024/volumes" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.244863 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps9ls"] Jan 21 16:09:21 crc kubenswrapper[4834]: E0121 16:09:21.245314 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="registry-server" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.245328 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="registry-server" Jan 21 16:09:21 crc kubenswrapper[4834]: E0121 16:09:21.245349 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="extract-utilities" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.245355 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="extract-utilities" Jan 21 16:09:21 crc kubenswrapper[4834]: E0121 16:09:21.245371 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="extract-content" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.245377 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" 
containerName="extract-content" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.245642 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ec10d5-fb56-44fe-9964-41474a46cac0" containerName="registry-server" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.246498 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.248982 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2bbt2" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.249656 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.259219 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls"] Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.291442 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5cg8v"] Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.293486 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.317146 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cg8v"] Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.400321 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.400885 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.401447 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa38c942-b967-4029-8c2f-e7c54ab9cedb-scripts\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.401643 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7clp\" (UniqueName: \"kubernetes.io/projected/fa38c942-b967-4029-8c2f-e7c54ab9cedb-kube-api-access-t7clp\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.401746 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42nv\" (UniqueName: \"kubernetes.io/projected/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-kube-api-access-n42nv\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.401824 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-scripts\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.402018 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-log\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.402099 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-etc-ovs\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.402131 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-run\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.402177 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-log-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.402284 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-lib\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.504654 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa38c942-b967-4029-8c2f-e7c54ab9cedb-scripts\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.504778 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7clp\" (UniqueName: \"kubernetes.io/projected/fa38c942-b967-4029-8c2f-e7c54ab9cedb-kube-api-access-t7clp\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.505189 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42nv\" (UniqueName: \"kubernetes.io/projected/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-kube-api-access-n42nv\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.505238 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-scripts\") pod 
\"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.505883 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-log\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507424 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa38c942-b967-4029-8c2f-e7c54ab9cedb-scripts\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507429 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-scripts\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507449 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-log\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507594 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-etc-ovs\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507625 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-run\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507653 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-log-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507803 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-lib\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507843 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-run\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507892 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507903 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-log-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.507999 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.508031 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.508070 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-var-lib\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.508174 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fa38c942-b967-4029-8c2f-e7c54ab9cedb-etc-ovs\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.508206 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-var-run-ovn\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.522373 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42nv\" (UniqueName: \"kubernetes.io/projected/fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0-kube-api-access-n42nv\") pod \"ovn-controller-ps9ls\" (UID: \"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0\") " pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.534109 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7clp\" (UniqueName: \"kubernetes.io/projected/fa38c942-b967-4029-8c2f-e7c54ab9cedb-kube-api-access-t7clp\") pod \"ovn-controller-ovs-5cg8v\" (UID: \"fa38c942-b967-4029-8c2f-e7c54ab9cedb\") " pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.568651 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:21 crc kubenswrapper[4834]: I0121 16:09:21.617096 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.068889 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls"] Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.468334 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cg8v"] Jan 21 16:09:22 crc kubenswrapper[4834]: W0121 16:09:22.473093 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa38c942_b967_4029_8c2f_e7c54ab9cedb.slice/crio-fd9fa883beeadf1b61681e27f3e20f4765a2fe4b4c6dbdd17f6c9d8de393aae1 WatchSource:0}: Error finding container fd9fa883beeadf1b61681e27f3e20f4765a2fe4b4c6dbdd17f6c9d8de393aae1: Status 404 returned error can't find the container with id fd9fa883beeadf1b61681e27f3e20f4765a2fe4b4c6dbdd17f6c9d8de393aae1 Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.824449 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cg8v" event={"ID":"fa38c942-b967-4029-8c2f-e7c54ab9cedb","Type":"ContainerStarted","Data":"260d7b490b6a0089a029158c63d722a70f7a92e758fb2dd2550e6df456c5fdea"} Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.824879 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cg8v" event={"ID":"fa38c942-b967-4029-8c2f-e7c54ab9cedb","Type":"ContainerStarted","Data":"fd9fa883beeadf1b61681e27f3e20f4765a2fe4b4c6dbdd17f6c9d8de393aae1"} Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.834754 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls" event={"ID":"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0","Type":"ContainerStarted","Data":"6fa233961db42feefc09db580bad0f900a31dbed7466e7548a92ecc3ca763d48"} Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.834803 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls" event={"ID":"fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0","Type":"ContainerStarted","Data":"e8cc31049e556af412e9318c59f350032eb4a57b08750d13107db57d87932afb"} Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.834940 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ps9ls" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.864865 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fgmph"] Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.878037 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.889651 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.891490 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fgmph"] Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.907326 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ps9ls" podStartSLOduration=1.907300324 podStartE2EDuration="1.907300324s" podCreationTimestamp="2026-01-21 16:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:22.871016572 +0000 UTC m=+5908.845365627" watchObservedRunningTime="2026-01-21 16:09:22.907300324 +0000 UTC m=+5908.881649369" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.952171 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/427e0c14-b79d-43fc-b5b5-ad41b83d9988-config\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.952366 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovs-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.952443 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r698\" (UniqueName: \"kubernetes.io/projected/427e0c14-b79d-43fc-b5b5-ad41b83d9988-kube-api-access-8r698\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:22 crc kubenswrapper[4834]: I0121 16:09:22.952545 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovn-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.054559 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovs-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.054651 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r698\" (UniqueName: \"kubernetes.io/projected/427e0c14-b79d-43fc-b5b5-ad41b83d9988-kube-api-access-8r698\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.054732 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovn-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.054762 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/427e0c14-b79d-43fc-b5b5-ad41b83d9988-config\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.054971 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovs-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.055076 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/427e0c14-b79d-43fc-b5b5-ad41b83d9988-ovn-rundir\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.055636 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/427e0c14-b79d-43fc-b5b5-ad41b83d9988-config\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.075321 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r698\" (UniqueName: \"kubernetes.io/projected/427e0c14-b79d-43fc-b5b5-ad41b83d9988-kube-api-access-8r698\") pod \"ovn-controller-metrics-fgmph\" (UID: \"427e0c14-b79d-43fc-b5b5-ad41b83d9988\") " pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.193520 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fgmph" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.579132 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-h6sx4"] Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.581166 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.596117 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-h6sx4"] Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.645863 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fgmph"] Jan 21 16:09:23 crc kubenswrapper[4834]: W0121 16:09:23.654867 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod427e0c14_b79d_43fc_b5b5_ad41b83d9988.slice/crio-3402d3ca41f4148b17ad016015da0d6776b079466c63181a5b2367d2f514df14 WatchSource:0}: Error finding container 3402d3ca41f4148b17ad016015da0d6776b079466c63181a5b2367d2f514df14: Status 404 returned error can't find the container with id 3402d3ca41f4148b17ad016015da0d6776b079466c63181a5b2367d2f514df14 Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.667845 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.668035 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztnc\" (UniqueName: \"kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.769749 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.770023 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fztnc\" (UniqueName: \"kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.771077 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.790986 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fztnc\" (UniqueName: \"kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc\") pod \"octavia-db-create-h6sx4\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.845601 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fgmph" 
event={"ID":"427e0c14-b79d-43fc-b5b5-ad41b83d9988","Type":"ContainerStarted","Data":"3402d3ca41f4148b17ad016015da0d6776b079466c63181a5b2367d2f514df14"} Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.847219 4834 generic.go:334] "Generic (PLEG): container finished" podID="fa38c942-b967-4029-8c2f-e7c54ab9cedb" containerID="260d7b490b6a0089a029158c63d722a70f7a92e758fb2dd2550e6df456c5fdea" exitCode=0 Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.847270 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cg8v" event={"ID":"fa38c942-b967-4029-8c2f-e7c54ab9cedb","Type":"ContainerDied","Data":"260d7b490b6a0089a029158c63d722a70f7a92e758fb2dd2550e6df456c5fdea"} Jan 21 16:09:23 crc kubenswrapper[4834]: I0121 16:09:23.897355 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.390769 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-h6sx4"] Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.856250 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fgmph" event={"ID":"427e0c14-b79d-43fc-b5b5-ad41b83d9988","Type":"ContainerStarted","Data":"ee858664d83f99a84c14666646389727462e9b1144ffd305dd17e2345d91f3ab"} Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.860460 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cg8v" event={"ID":"fa38c942-b967-4029-8c2f-e7c54ab9cedb","Type":"ContainerStarted","Data":"36643a57fea03e108d4fddabb0174a237d2ac5a8a8afd4fd557e27d6ce6106b0"} Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.860504 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cg8v" event={"ID":"fa38c942-b967-4029-8c2f-e7c54ab9cedb","Type":"ContainerStarted","Data":"af0f9c15019bddd9f70d26d357ffca47b811450b33c61b5dbc08b16cad7b2e61"} Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.860520 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.860532 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.864078 4834 generic.go:334] "Generic (PLEG): container finished" podID="267a7bbb-4e41-49ee-89c2-3e43db4ca52c" containerID="099b287e41765628facfe23daf5aa366db3153e5f7b6860e9f61cf04911ea383" exitCode=0 Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.864112 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-h6sx4" event={"ID":"267a7bbb-4e41-49ee-89c2-3e43db4ca52c","Type":"ContainerDied","Data":"099b287e41765628facfe23daf5aa366db3153e5f7b6860e9f61cf04911ea383"} Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.864133 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-h6sx4" event={"ID":"267a7bbb-4e41-49ee-89c2-3e43db4ca52c","Type":"ContainerStarted","Data":"932e9f43f623d577ee6f0717adb774ac1af08481d78812a55e17a1ac604ffa11"} Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.881828 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fgmph" podStartSLOduration=2.881800402 podStartE2EDuration="2.881800402s" podCreationTimestamp="2026-01-21 16:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:24.878312183 +0000 UTC m=+5910.852661228" watchObservedRunningTime="2026-01-21 16:09:24.881800402 +0000 UTC m=+5910.856149467" Jan 21 16:09:24 crc kubenswrapper[4834]: I0121 16:09:24.908173 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5cg8v" podStartSLOduration=3.908146724 podStartE2EDuration="3.908146724s" podCreationTimestamp="2026-01-21 16:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:24.902332713 +0000 UTC m=+5910.876681768" watchObservedRunningTime="2026-01-21 16:09:24.908146724 +0000 UTC m=+5910.882495779" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.123289 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-cdc0-account-create-update-9dqlg"] Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.126573 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.130042 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.153833 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-cdc0-account-create-update-9dqlg"] Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.302888 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrqh\" (UniqueName: \"kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.303367 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.405728 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrqh\" (UniqueName: \"kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.405891 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.406660 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: 
\"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.438840 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrqh\" (UniqueName: \"kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh\") pod \"octavia-cdc0-account-create-update-9dqlg\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.452169 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:25 crc kubenswrapper[4834]: I0121 16:09:25.947608 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-cdc0-account-create-update-9dqlg"] Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.317655 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.433362 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts\") pod \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.433537 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fztnc\" (UniqueName: \"kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc\") pod \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\" (UID: \"267a7bbb-4e41-49ee-89c2-3e43db4ca52c\") " Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.434260 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "267a7bbb-4e41-49ee-89c2-3e43db4ca52c" (UID: "267a7bbb-4e41-49ee-89c2-3e43db4ca52c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.434709 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.439383 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc" (OuterVolumeSpecName: "kube-api-access-fztnc") pod "267a7bbb-4e41-49ee-89c2-3e43db4ca52c" (UID: "267a7bbb-4e41-49ee-89c2-3e43db4ca52c"). InnerVolumeSpecName "kube-api-access-fztnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.536513 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fztnc\" (UniqueName: \"kubernetes.io/projected/267a7bbb-4e41-49ee-89c2-3e43db4ca52c-kube-api-access-fztnc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.890291 4834 generic.go:334] "Generic (PLEG): container finished" podID="59880e0f-faf1-4ebf-873c-fe4782233147" containerID="910ba4b8cdc2537e07fe4133312bdc68f34449a3e9f03681824303e6c1d5c474" exitCode=0 Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.890420 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-cdc0-account-create-update-9dqlg" event={"ID":"59880e0f-faf1-4ebf-873c-fe4782233147","Type":"ContainerDied","Data":"910ba4b8cdc2537e07fe4133312bdc68f34449a3e9f03681824303e6c1d5c474"} Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.890744 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-cdc0-account-create-update-9dqlg" event={"ID":"59880e0f-faf1-4ebf-873c-fe4782233147","Type":"ContainerStarted","Data":"495a5e3ba07809baaa9e315b6288d3dd51ae9eac5cb41de0ace13948c9b735f8"} Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.892709 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-h6sx4" event={"ID":"267a7bbb-4e41-49ee-89c2-3e43db4ca52c","Type":"ContainerDied","Data":"932e9f43f623d577ee6f0717adb774ac1af08481d78812a55e17a1ac604ffa11"} Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.892738 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932e9f43f623d577ee6f0717adb774ac1af08481d78812a55e17a1ac604ffa11" Jan 21 16:09:26 crc kubenswrapper[4834]: I0121 16:09:26.892798 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-h6sx4" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.291549 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.478885 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrqh\" (UniqueName: \"kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh\") pod \"59880e0f-faf1-4ebf-873c-fe4782233147\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.478991 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts\") pod \"59880e0f-faf1-4ebf-873c-fe4782233147\" (UID: \"59880e0f-faf1-4ebf-873c-fe4782233147\") " Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.479773 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59880e0f-faf1-4ebf-873c-fe4782233147" (UID: "59880e0f-faf1-4ebf-873c-fe4782233147"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.483595 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh" (OuterVolumeSpecName: "kube-api-access-lfrqh") pod "59880e0f-faf1-4ebf-873c-fe4782233147" (UID: "59880e0f-faf1-4ebf-873c-fe4782233147"). InnerVolumeSpecName "kube-api-access-lfrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.581227 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrqh\" (UniqueName: \"kubernetes.io/projected/59880e0f-faf1-4ebf-873c-fe4782233147-kube-api-access-lfrqh\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.581266 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59880e0f-faf1-4ebf-873c-fe4782233147-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.918456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-cdc0-account-create-update-9dqlg" event={"ID":"59880e0f-faf1-4ebf-873c-fe4782233147","Type":"ContainerDied","Data":"495a5e3ba07809baaa9e315b6288d3dd51ae9eac5cb41de0ace13948c9b735f8"} Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.918500 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495a5e3ba07809baaa9e315b6288d3dd51ae9eac5cb41de0ace13948c9b735f8" Jan 21 16:09:28 crc kubenswrapper[4834]: I0121 16:09:28.918571 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-cdc0-account-create-update-9dqlg" Jan 21 16:09:30 crc kubenswrapper[4834]: I0121 16:09:30.325176 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:09:30 crc kubenswrapper[4834]: E0121 16:09:30.325746 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.038757 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2hgcg"] Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.048613 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2hgcg"] Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.630547 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-gs4pk"] Jan 21 16:09:31 crc kubenswrapper[4834]: E0121 16:09:31.631285 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267a7bbb-4e41-49ee-89c2-3e43db4ca52c" containerName="mariadb-database-create" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.631309 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="267a7bbb-4e41-49ee-89c2-3e43db4ca52c" containerName="mariadb-database-create" Jan 21 16:09:31 crc kubenswrapper[4834]: E0121 16:09:31.631343 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59880e0f-faf1-4ebf-873c-fe4782233147" containerName="mariadb-account-create-update" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.631355 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="59880e0f-faf1-4ebf-873c-fe4782233147" containerName="mariadb-account-create-update" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.631713 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="267a7bbb-4e41-49ee-89c2-3e43db4ca52c" containerName="mariadb-database-create" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.631753 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="59880e0f-faf1-4ebf-873c-fe4782233147" containerName="mariadb-account-create-update" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.632956 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.642943 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-gs4pk"] Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.689304 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.691531 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.725249 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.752456 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4495\" (UniqueName: \"kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.752679 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.854962 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.855044 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.856094 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.856241 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4495\" (UniqueName: \"kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.856344 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkjm\" (UniqueName: \"kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.856396 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.874313 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4495\" (UniqueName: \"kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495\") pod \"octavia-persistence-db-create-gs4pk\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.958088 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.958228 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xkjm\" (UniqueName: \"kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.958275 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.958509 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.958973 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.959461 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:31 crc kubenswrapper[4834]: I0121 16:09:31.986958 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xkjm\" (UniqueName: \"kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm\") pod \"community-operators-p8tzh\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.010426 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.255043 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-8947-account-create-update-9q7zv"] Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.259385 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.275469 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.280461 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-8947-account-create-update-9q7zv"] Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.353036 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bf7832-b212-49cc-a6f9-3de2a895a837" path="/var/lib/kubelet/pods/b2bf7832-b212-49cc-a6f9-3de2a895a837/volumes" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.381229 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.381784 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2q65\" (UniqueName: \"kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.482036 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-gs4pk"] Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.484736 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.485003 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2q65\" (UniqueName: \"kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.486488 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.513395 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2q65\" (UniqueName: \"kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65\") pod \"octavia-8947-account-create-update-9q7zv\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.616579 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.732958 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.963150 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerStarted","Data":"387a75ff88c7596316ab838f6d3c7e2ccd1b9887a2b49102ce862e81e1a6e44b"} Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.963617 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerStarted","Data":"ef22862f2746e1783b89ea60b8d65aeaef1996f3473df13f8fe436cccccf446a"} Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.968092 4834 generic.go:334] "Generic (PLEG): container finished" podID="3c869cb2-7419-4c97-b877-02533151d2b6" containerID="e75b8c6b826eb55cc41ffdaa3b317b76fe949daa820c67cfa52f872700bb156a" exitCode=0 Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.968134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gs4pk" event={"ID":"3c869cb2-7419-4c97-b877-02533151d2b6","Type":"ContainerDied","Data":"e75b8c6b826eb55cc41ffdaa3b317b76fe949daa820c67cfa52f872700bb156a"} Jan 21 16:09:32 crc kubenswrapper[4834]: I0121 16:09:32.968182 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gs4pk" event={"ID":"3c869cb2-7419-4c97-b877-02533151d2b6","Type":"ContainerStarted","Data":"29c0a28f4f572540871535ec34525aa9421fe69fc499698a9a756c5b52bd8805"} Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.249854 
4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-8947-account-create-update-9q7zv"] Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.984052 4834 generic.go:334] "Generic (PLEG): container finished" podID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerID="387a75ff88c7596316ab838f6d3c7e2ccd1b9887a2b49102ce862e81e1a6e44b" exitCode=0 Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.984106 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerDied","Data":"387a75ff88c7596316ab838f6d3c7e2ccd1b9887a2b49102ce862e81e1a6e44b"} Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.984511 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerStarted","Data":"9c59bb042f7f13391d8b1b2c6e8a8b7cf105efcdcf121b1f36690458dec57797"} Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.990697 4834 generic.go:334] "Generic (PLEG): container finished" podID="47d096b2-0fbc-4b87-8a6f-cb77217ced9d" containerID="24aaa50e53620921258aee6e62d15fe59150d20a1358882edbc5773f4c1fd305" exitCode=0 Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.990957 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-8947-account-create-update-9q7zv" event={"ID":"47d096b2-0fbc-4b87-8a6f-cb77217ced9d","Type":"ContainerDied","Data":"24aaa50e53620921258aee6e62d15fe59150d20a1358882edbc5773f4c1fd305"} Jan 21 16:09:33 crc kubenswrapper[4834]: I0121 16:09:33.990986 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-8947-account-create-update-9q7zv" event={"ID":"47d096b2-0fbc-4b87-8a6f-cb77217ced9d","Type":"ContainerStarted","Data":"edc371ce6ecb498545e30d028476769c82cca521b82bcae682149b7cb8161cae"} Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.403801 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.529079 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts\") pod \"3c869cb2-7419-4c97-b877-02533151d2b6\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.529158 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4495\" (UniqueName: \"kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495\") pod \"3c869cb2-7419-4c97-b877-02533151d2b6\" (UID: \"3c869cb2-7419-4c97-b877-02533151d2b6\") " Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.529633 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c869cb2-7419-4c97-b877-02533151d2b6" (UID: "3c869cb2-7419-4c97-b877-02533151d2b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.529777 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c869cb2-7419-4c97-b877-02533151d2b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.534852 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495" (OuterVolumeSpecName: "kube-api-access-l4495") pod "3c869cb2-7419-4c97-b877-02533151d2b6" (UID: "3c869cb2-7419-4c97-b877-02533151d2b6"). InnerVolumeSpecName "kube-api-access-l4495". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:34 crc kubenswrapper[4834]: I0121 16:09:34.631164 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4495\" (UniqueName: \"kubernetes.io/projected/3c869cb2-7419-4c97-b877-02533151d2b6-kube-api-access-l4495\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.000958 4834 generic.go:334] "Generic (PLEG): container finished" podID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerID="9c59bb042f7f13391d8b1b2c6e8a8b7cf105efcdcf121b1f36690458dec57797" exitCode=0 Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.001025 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerDied","Data":"9c59bb042f7f13391d8b1b2c6e8a8b7cf105efcdcf121b1f36690458dec57797"} Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.005635 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gs4pk" event={"ID":"3c869cb2-7419-4c97-b877-02533151d2b6","Type":"ContainerDied","Data":"29c0a28f4f572540871535ec34525aa9421fe69fc499698a9a756c5b52bd8805"} Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.005798 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29c0a28f4f572540871535ec34525aa9421fe69fc499698a9a756c5b52bd8805" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.005723 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-gs4pk" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.410989 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.550301 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts\") pod \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.550596 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2q65\" (UniqueName: \"kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65\") pod \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\" (UID: \"47d096b2-0fbc-4b87-8a6f-cb77217ced9d\") " Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.551112 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47d096b2-0fbc-4b87-8a6f-cb77217ced9d" (UID: "47d096b2-0fbc-4b87-8a6f-cb77217ced9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.551548 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.558485 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65" (OuterVolumeSpecName: "kube-api-access-f2q65") pod "47d096b2-0fbc-4b87-8a6f-cb77217ced9d" (UID: "47d096b2-0fbc-4b87-8a6f-cb77217ced9d"). InnerVolumeSpecName "kube-api-access-f2q65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:35 crc kubenswrapper[4834]: I0121 16:09:35.654511 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2q65\" (UniqueName: \"kubernetes.io/projected/47d096b2-0fbc-4b87-8a6f-cb77217ced9d-kube-api-access-f2q65\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:36 crc kubenswrapper[4834]: I0121 16:09:36.016790 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-8947-account-create-update-9q7zv" event={"ID":"47d096b2-0fbc-4b87-8a6f-cb77217ced9d","Type":"ContainerDied","Data":"edc371ce6ecb498545e30d028476769c82cca521b82bcae682149b7cb8161cae"} Jan 21 16:09:36 crc kubenswrapper[4834]: I0121 16:09:36.016843 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc371ce6ecb498545e30d028476769c82cca521b82bcae682149b7cb8161cae" Jan 21 16:09:36 crc kubenswrapper[4834]: I0121 16:09:36.016944 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-8947-account-create-update-9q7zv" Jan 21 16:09:36 crc kubenswrapper[4834]: I0121 16:09:36.024731 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerStarted","Data":"8ba69f4c1db8ccf9b81e8afa1ddc0b1022ab74a1532cbf8898d17b2b2147de05"} Jan 21 16:09:36 crc kubenswrapper[4834]: I0121 16:09:36.057533 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p8tzh" podStartSLOduration=2.5189035520000003 podStartE2EDuration="5.057496933s" podCreationTimestamp="2026-01-21 16:09:31 +0000 UTC" firstStartedPulling="2026-01-21 16:09:32.965525794 +0000 UTC m=+5918.939874839" lastFinishedPulling="2026-01-21 16:09:35.504119175 +0000 UTC m=+5921.478468220" observedRunningTime="2026-01-21 16:09:36.044302082 +0000 UTC m=+5922.018651117" watchObservedRunningTime="2026-01-21 16:09:36.057496933 +0000 UTC m=+5922.031845978" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.633687 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6d7f5574f4-nnc6m"] Jan 21 16:09:37 crc kubenswrapper[4834]: E0121 16:09:37.634819 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c869cb2-7419-4c97-b877-02533151d2b6" containerName="mariadb-database-create" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.634848 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c869cb2-7419-4c97-b877-02533151d2b6" containerName="mariadb-database-create" Jan 21 16:09:37 crc kubenswrapper[4834]: E0121 16:09:37.634867 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d096b2-0fbc-4b87-8a6f-cb77217ced9d" containerName="mariadb-account-create-update" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.634875 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d096b2-0fbc-4b87-8a6f-cb77217ced9d" containerName="mariadb-account-create-update" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.635183 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c869cb2-7419-4c97-b877-02533151d2b6" containerName="mariadb-database-create" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.635224 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d096b2-0fbc-4b87-8a6f-cb77217ced9d" containerName="mariadb-account-create-update" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.644176 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.649225 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6d7f5574f4-nnc6m"] Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.649900 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-krmsv" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.659597 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.661469 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.803478 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-scripts\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.803524 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.803770 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data-merged\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.804185 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-octavia-run\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.804354 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-combined-ca-bundle\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907080 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-octavia-run\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907203 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-combined-ca-bundle\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 
16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-scripts\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907356 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907411 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data-merged\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.907720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-octavia-run\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.909144 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data-merged\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.914679 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-combined-ca-bundle\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.914878 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-config-data\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.915671 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf162c87-ffd6-4a17-8ddf-d16cd28bfaca-scripts\") pod \"octavia-api-6d7f5574f4-nnc6m\" (UID: \"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca\") " pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:37 crc kubenswrapper[4834]: I0121 16:09:37.982477 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:38 crc kubenswrapper[4834]: I0121 16:09:38.587332 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6d7f5574f4-nnc6m"] Jan 21 16:09:38 crc kubenswrapper[4834]: W0121 16:09:38.597460 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf162c87_ffd6_4a17_8ddf_d16cd28bfaca.slice/crio-9e104c99391eb9de6a5848c8f77395c6821cceb792c3eb3bc30002ff05023c73 WatchSource:0}: Error finding container 9e104c99391eb9de6a5848c8f77395c6821cceb792c3eb3bc30002ff05023c73: Status 404 returned error can't find the container with id 9e104c99391eb9de6a5848c8f77395c6821cceb792c3eb3bc30002ff05023c73 Jan 21 16:09:39 crc kubenswrapper[4834]: I0121 16:09:39.092195 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d7f5574f4-nnc6m" event={"ID":"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca","Type":"ContainerStarted","Data":"9e104c99391eb9de6a5848c8f77395c6821cceb792c3eb3bc30002ff05023c73"} Jan 21 16:09:42 crc kubenswrapper[4834]: I0121 16:09:42.011384 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:42 crc kubenswrapper[4834]: I0121 16:09:42.011762 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:42 crc kubenswrapper[4834]: I0121 16:09:42.078401 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:42 crc kubenswrapper[4834]: I0121 16:09:42.235918 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:42 crc kubenswrapper[4834]: I0121 16:09:42.340698 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:44 crc kubenswrapper[4834]: I0121 16:09:44.197067 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p8tzh" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="registry-server" containerID="cri-o://8ba69f4c1db8ccf9b81e8afa1ddc0b1022ab74a1532cbf8898d17b2b2147de05" gracePeriod=2 Jan 21 16:09:44 crc kubenswrapper[4834]: I0121 16:09:44.341274 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:09:44 crc kubenswrapper[4834]: E0121 16:09:44.342347 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:09:45 crc kubenswrapper[4834]: I0121 16:09:45.208882 4834 generic.go:334] "Generic (PLEG): container finished" podID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerID="8ba69f4c1db8ccf9b81e8afa1ddc0b1022ab74a1532cbf8898d17b2b2147de05" exitCode=0 Jan 21 16:09:45 crc kubenswrapper[4834]: I0121 16:09:45.208964 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" 
event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerDied","Data":"8ba69f4c1db8ccf9b81e8afa1ddc0b1022ab74a1532cbf8898d17b2b2147de05"} Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.612475 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.791442 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities" (OuterVolumeSpecName: "utilities") pod "2f4e66c4-27e4-4a10-9417-d175e2e05e79" (UID: "2f4e66c4-27e4-4a10-9417-d175e2e05e79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.791757 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities\") pod \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.791858 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content\") pod \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.792089 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xkjm\" (UniqueName: \"kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm\") pod \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\" (UID: \"2f4e66c4-27e4-4a10-9417-d175e2e05e79\") " Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.793486 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.829138 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm" (OuterVolumeSpecName: "kube-api-access-8xkjm") pod "2f4e66c4-27e4-4a10-9417-d175e2e05e79" (UID: "2f4e66c4-27e4-4a10-9417-d175e2e05e79"). InnerVolumeSpecName "kube-api-access-8xkjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.829359 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f4e66c4-27e4-4a10-9417-d175e2e05e79" (UID: "2f4e66c4-27e4-4a10-9417-d175e2e05e79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.894494 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xkjm\" (UniqueName: \"kubernetes.io/projected/2f4e66c4-27e4-4a10-9417-d175e2e05e79-kube-api-access-8xkjm\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:49 crc kubenswrapper[4834]: I0121 16:09:49.894533 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f4e66c4-27e4-4a10-9417-d175e2e05e79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.263020 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p8tzh" event={"ID":"2f4e66c4-27e4-4a10-9417-d175e2e05e79","Type":"ContainerDied","Data":"ef22862f2746e1783b89ea60b8d65aeaef1996f3473df13f8fe436cccccf446a"} Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.263104 4834 scope.go:117] "RemoveContainer" containerID="8ba69f4c1db8ccf9b81e8afa1ddc0b1022ab74a1532cbf8898d17b2b2147de05" Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.263128 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p8tzh" Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.366793 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.386966 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p8tzh"] Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.391192 4834 scope.go:117] "RemoveContainer" containerID="9c59bb042f7f13391d8b1b2c6e8a8b7cf105efcdcf121b1f36690458dec57797" Jan 21 16:09:50 crc kubenswrapper[4834]: I0121 16:09:50.422521 4834 scope.go:117] "RemoveContainer" containerID="387a75ff88c7596316ab838f6d3c7e2ccd1b9887a2b49102ce862e81e1a6e44b" Jan 21 16:09:51 crc kubenswrapper[4834]: I0121 16:09:51.280390 4834 generic.go:334] "Generic (PLEG): container finished" podID="cf162c87-ffd6-4a17-8ddf-d16cd28bfaca" containerID="37ff0fb3495fe60fd12e64b2c9c3007ce84a92163f8eb8a08d445240202fa125" exitCode=0 Jan 21 16:09:51 crc kubenswrapper[4834]: I0121 16:09:51.280503 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d7f5574f4-nnc6m" event={"ID":"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca","Type":"ContainerDied","Data":"37ff0fb3495fe60fd12e64b2c9c3007ce84a92163f8eb8a08d445240202fa125"} Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.291979 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d7f5574f4-nnc6m" event={"ID":"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca","Type":"ContainerStarted","Data":"fb276d9526e5b37b60bf8ceca57107e1d94307c8320690b1829369b1a01f536d"} Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.292313 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6d7f5574f4-nnc6m" event={"ID":"cf162c87-ffd6-4a17-8ddf-d16cd28bfaca","Type":"ContainerStarted","Data":"923732f7f8a4c56f94475a2c79024d09afc869eb0aa2277dd88703167384d018"} Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.292506 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.292522 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/octavia-api-6d7f5574f4-nnc6m" Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.323665 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6d7f5574f4-nnc6m" podStartSLOduration=3.6654836509999997 podStartE2EDuration="15.323646829s" podCreationTimestamp="2026-01-21 16:09:37 +0000 UTC" firstStartedPulling="2026-01-21 16:09:38.599956424 +0000 UTC m=+5924.574305479" lastFinishedPulling="2026-01-21 16:09:50.258119612 +0000 UTC m=+5936.232468657" observedRunningTime="2026-01-21 16:09:52.313520922 +0000 UTC m=+5938.287869967" watchObservedRunningTime="2026-01-21 16:09:52.323646829 +0000 UTC m=+5938.297995874" Jan 21 16:09:52 crc kubenswrapper[4834]: I0121 16:09:52.341333 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" path="/var/lib/kubelet/pods/2f4e66c4-27e4-4a10-9417-d175e2e05e79/volumes" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.615161 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ps9ls" podUID="fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:09:56 crc kubenswrapper[4834]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:09:56 crc kubenswrapper[4834]: > Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.666541 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.668340 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cg8v" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.813165 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps9ls-config-6sv7s"] Jan 21 16:09:56 crc kubenswrapper[4834]: E0121 16:09:56.813692 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="extract-utilities" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.813719 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="extract-utilities" Jan 21 16:09:56 crc kubenswrapper[4834]: E0121 16:09:56.813746 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="registry-server" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.813755 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="registry-server" Jan 21 16:09:56 crc kubenswrapper[4834]: E0121 16:09:56.813784 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="extract-content" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.813795 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="extract-content" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.814073 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4e66c4-27e4-4a10-9417-d175e2e05e79" containerName="registry-server" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.814920 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.819268 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.842284 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls-config-6sv7s"] Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849491 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849636 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849730 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849811 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxfh\" (UniqueName: \"kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.849877 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.951778 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.951849 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts\") pod 
\"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.951871 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtxfh\" (UniqueName: \"kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.951893 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.951987 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.952037 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.952626 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.952688 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.954490 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.954829 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.955318 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts\") pod 
\"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:56 crc kubenswrapper[4834]: I0121 16:09:56.976789 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtxfh\" (UniqueName: \"kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh\") pod \"ovn-controller-ps9ls-config-6sv7s\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:57 crc kubenswrapper[4834]: I0121 16:09:57.138616 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:09:57 crc kubenswrapper[4834]: I0121 16:09:57.707350 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls-config-6sv7s"] Jan 21 16:09:57 crc kubenswrapper[4834]: W0121 16:09:57.715231 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod342c6e9e_75da_4c61_841e_90406bdd56fc.slice/crio-dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed WatchSource:0}: Error finding container dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed: Status 404 returned error can't find the container with id dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed Jan 21 16:09:58 crc kubenswrapper[4834]: I0121 16:09:58.366882 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-6sv7s" event={"ID":"342c6e9e-75da-4c61-841e-90406bdd56fc","Type":"ContainerStarted","Data":"7ce4cdb8d6434e098aaf222c71f3d4796c8b589cfec5d1888b581ee0f0462973"} Jan 21 16:09:58 crc kubenswrapper[4834]: I0121 16:09:58.367265 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-6sv7s" event={"ID":"342c6e9e-75da-4c61-841e-90406bdd56fc","Type":"ContainerStarted","Data":"dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed"} Jan 21 16:09:58 crc kubenswrapper[4834]: I0121 16:09:58.386611 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ps9ls-config-6sv7s" podStartSLOduration=2.38658071 podStartE2EDuration="2.38658071s" podCreationTimestamp="2026-01-21 16:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:58.382260226 +0000 UTC m=+5944.356609271" watchObservedRunningTime="2026-01-21 16:09:58.38658071 +0000 UTC m=+5944.360929755" Jan 21 16:09:59 crc kubenswrapper[4834]: I0121 16:09:59.324756 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:09:59 crc kubenswrapper[4834]: E0121 16:09:59.325365 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:09:59 crc kubenswrapper[4834]: I0121 16:09:59.379203 4834 generic.go:334] "Generic (PLEG): container finished" podID="342c6e9e-75da-4c61-841e-90406bdd56fc" 
containerID="7ce4cdb8d6434e098aaf222c71f3d4796c8b589cfec5d1888b581ee0f0462973" exitCode=0 Jan 21 16:09:59 crc kubenswrapper[4834]: I0121 16:09:59.379259 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-6sv7s" event={"ID":"342c6e9e-75da-4c61-841e-90406bdd56fc","Type":"ContainerDied","Data":"7ce4cdb8d6434e098aaf222c71f3d4796c8b589cfec5d1888b581ee0f0462973"} Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.271113 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-9gdbz"] Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.272903 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.276343 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.276979 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.286779 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-9gdbz"] Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.314672 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.451470 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.451801 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c78f244f-a953-49ad-b632-96f0ec0f75ee-hm-ports\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.451838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-scripts\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.451875 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data-merged\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.554165 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-scripts\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.554249 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data-merged\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.554385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.554458 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c78f244f-a953-49ad-b632-96f0ec0f75ee-hm-ports\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.555190 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/c78f244f-a953-49ad-b632-96f0ec0f75ee-hm-ports\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.555434 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data-merged\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.565679 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-config-data\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.567883 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c78f244f-a953-49ad-b632-96f0ec0f75ee-scripts\") pod \"octavia-rsyslog-9gdbz\" (UID: \"c78f244f-a953-49ad-b632-96f0ec0f75ee\") " pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.636597 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-9gdbz" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.746011 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.868750 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.868813 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.868871 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.868940 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.869113 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtxfh\" (UniqueName: \"kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.869185 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run\") pod \"342c6e9e-75da-4c61-841e-90406bdd56fc\" (UID: \"342c6e9e-75da-4c61-841e-90406bdd56fc\") " Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870016 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870037 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870062 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870157 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run" (OuterVolumeSpecName: "var-run") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870492 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts" (OuterVolumeSpecName: "scripts") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870642 4834 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870662 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870673 4834 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870683 4834 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/342c6e9e-75da-4c61-841e-90406bdd56fc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.870696 4834 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/342c6e9e-75da-4c61-841e-90406bdd56fc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.874075 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh" (OuterVolumeSpecName: "kube-api-access-qtxfh") pod "342c6e9e-75da-4c61-841e-90406bdd56fc" (UID: "342c6e9e-75da-4c61-841e-90406bdd56fc"). InnerVolumeSpecName "kube-api-access-qtxfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:00 crc kubenswrapper[4834]: I0121 16:10:00.975436 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtxfh\" (UniqueName: \"kubernetes.io/projected/342c6e9e-75da-4c61-841e-90406bdd56fc-kube-api-access-qtxfh\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.023513 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"] Jan 21 16:10:01 crc kubenswrapper[4834]: E0121 16:10:01.023905 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342c6e9e-75da-4c61-841e-90406bdd56fc" containerName="ovn-config" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.023920 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="342c6e9e-75da-4c61-841e-90406bdd56fc" containerName="ovn-config" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.024147 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="342c6e9e-75da-4c61-841e-90406bdd56fc" containerName="ovn-config" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.025094 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.027246 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.047209 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"] Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.077530 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.077740 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.179192 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.179710 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.179786 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image\") pod 
\"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.192399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-pf6wm\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") " pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.255767 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-9gdbz"] Jan 21 16:10:01 crc kubenswrapper[4834]: W0121 16:10:01.263381 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc78f244f_a953_49ad_b632_96f0ec0f75ee.slice/crio-f5567854de8472a55ced15f303f381f8a533670c3541307777ab0740173d3a0e WatchSource:0}: Error finding container f5567854de8472a55ced15f303f381f8a533670c3541307777ab0740173d3a0e: Status 404 returned error can't find the container with id f5567854de8472a55ced15f303f381f8a533670c3541307777ab0740173d3a0e Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.361766 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.421635 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-6sv7s" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.423096 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-6sv7s" event={"ID":"342c6e9e-75da-4c61-841e-90406bdd56fc","Type":"ContainerDied","Data":"dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed"} Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.423151 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8c1b2ff43aaef5687089452d3d043005ecb25a995fb963d8994b72496586ed" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.429412 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9gdbz" event={"ID":"c78f244f-a953-49ad-b632-96f0ec0f75ee","Type":"ContainerStarted","Data":"f5567854de8472a55ced15f303f381f8a533670c3541307777ab0740173d3a0e"} Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.480184 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ps9ls-config-6sv7s"] Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.492858 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ps9ls-config-6sv7s"] Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.522182 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ps9ls-config-2zgds"] Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.523989 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.526805 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.534797 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls-config-2zgds"] Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594094 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594181 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594220 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594248 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtjf\" (UniqueName: \"kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594280 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.594305 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.673282 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ps9ls" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698151 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698231 4834 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698274 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698312 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtjf\" (UniqueName: \"kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698357 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698396 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.698993 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.699096 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.699207 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.699402 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds" Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.704397 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds"
Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.728204 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtjf\" (UniqueName: \"kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf\") pod \"ovn-controller-ps9ls-config-2zgds\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") " pod="openstack/ovn-controller-ps9ls-config-2zgds"
Jan 21 16:10:01 crc kubenswrapper[4834]: I0121 16:10:01.959115 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-2zgds"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.044973 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"]
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.202956 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-sc5zg"]
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.204694 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.207477 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.220921 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-sc5zg"]
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.314515 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.314637 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.314754 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.314886 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.338145 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342c6e9e-75da-4c61-841e-90406bdd56fc" path="/var/lib/kubelet/pods/342c6e9e-75da-4c61-841e-90406bdd56fc/volumes"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.416523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.417049 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.417144 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.417211 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.417642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.424245 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.424331 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.424786 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts\") pod \"octavia-db-sync-sc5zg\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") " pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.447669 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" event={"ID":"f90ca8c2-ebc0-4868-8582-b897995b211d","Type":"ContainerStarted","Data":"7bb4d7310cbc5c02f56765aeda4e49dc35ae385257083697fd4166fbb14b72ad"}
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.457304 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ps9ls-config-2zgds"]
Jan 21 16:10:02 crc kubenswrapper[4834]: W0121 16:10:02.459906 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode456ecd7_82fb_40b1_9f44_04e2484ad29f.slice/crio-984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda WatchSource:0}: Error finding container 984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda: Status 404 returned error can't find the container with id 984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda
Jan 21 16:10:02 crc kubenswrapper[4834]: I0121 16:10:02.533389 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:03 crc kubenswrapper[4834]: I0121 16:10:03.085892 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-sc5zg"]
Jan 21 16:10:03 crc kubenswrapper[4834]: I0121 16:10:03.474830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-2zgds" event={"ID":"e456ecd7-82fb-40b1-9f44-04e2484ad29f","Type":"ContainerStarted","Data":"077572a5bed0cb4a17cbf4f9983f73810dbb8ab174a87932e965df8f7d6dc4f8"}
Jan 21 16:10:03 crc kubenswrapper[4834]: I0121 16:10:03.475148 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-2zgds" event={"ID":"e456ecd7-82fb-40b1-9f44-04e2484ad29f","Type":"ContainerStarted","Data":"984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda"}
Jan 21 16:10:03 crc kubenswrapper[4834]: I0121 16:10:03.487670 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-sc5zg" event={"ID":"af5fa72e-0703-40bf-9041-4b3b21899793","Type":"ContainerStarted","Data":"054d6e2b1c9328d780de312f4bc7eff0dcc55137a7cf54c8e1b75050d82cf231"}
Jan 21 16:10:03 crc kubenswrapper[4834]: I0121 16:10:03.506473 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ps9ls-config-2zgds" podStartSLOduration=2.506446362 podStartE2EDuration="2.506446362s" podCreationTimestamp="2026-01-21 16:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:03.501286911 +0000 UTC m=+5949.475635956" watchObservedRunningTime="2026-01-21 16:10:03.506446362 +0000 UTC m=+5949.480795417"
Jan 21 16:10:04 crc kubenswrapper[4834]: I0121 16:10:04.512867 4834 generic.go:334] "Generic (PLEG): container finished" podID="e456ecd7-82fb-40b1-9f44-04e2484ad29f" containerID="077572a5bed0cb4a17cbf4f9983f73810dbb8ab174a87932e965df8f7d6dc4f8" exitCode=0
Jan 21 16:10:04 crc kubenswrapper[4834]: I0121 16:10:04.513048 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-2zgds" event={"ID":"e456ecd7-82fb-40b1-9f44-04e2484ad29f","Type":"ContainerDied","Data":"077572a5bed0cb4a17cbf4f9983f73810dbb8ab174a87932e965df8f7d6dc4f8"}
Jan 21 16:10:04 crc kubenswrapper[4834]: I0121 16:10:04.520459 4834 generic.go:334] "Generic (PLEG): container finished" podID="af5fa72e-0703-40bf-9041-4b3b21899793" containerID="027c83181be5d21f438c1ea3bc0a003d25e6d75fac6d2814f1c8e2817a1b9564" exitCode=0
Jan 21 16:10:04 crc kubenswrapper[4834]: I0121 16:10:04.520510 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-sc5zg" event={"ID":"af5fa72e-0703-40bf-9041-4b3b21899793","Type":"ContainerDied","Data":"027c83181be5d21f438c1ea3bc0a003d25e6d75fac6d2814f1c8e2817a1b9564"}
Jan 21 16:10:05 crc kubenswrapper[4834]: I0121 16:10:05.530684 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9gdbz" event={"ID":"c78f244f-a953-49ad-b632-96f0ec0f75ee","Type":"ContainerStarted","Data":"97560c5b254d81b9bf55d8c1f5be6a6daf333704891af5190195d819949dc6d2"}
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:05.999770 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-2zgds"
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.110886 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.111470 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.111527 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.111656 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtjf\" (UniqueName: \"kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.111729 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.111808 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts\") pod \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\" (UID: \"e456ecd7-82fb-40b1-9f44-04e2484ad29f\") "
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.112232 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.112738 4834 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.112769 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.112790 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run" (OuterVolumeSpecName: "var-run") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.113356 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts" (OuterVolumeSpecName: "scripts") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.113393 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.118769 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf" (OuterVolumeSpecName: "kube-api-access-wgtjf") pod "e456ecd7-82fb-40b1-9f44-04e2484ad29f" (UID: "e456ecd7-82fb-40b1-9f44-04e2484ad29f"). InnerVolumeSpecName "kube-api-access-wgtjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.216026 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtjf\" (UniqueName: \"kubernetes.io/projected/e456ecd7-82fb-40b1-9f44-04e2484ad29f-kube-api-access-wgtjf\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.216112 4834 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.216125 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e456ecd7-82fb-40b1-9f44-04e2484ad29f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.216134 4834 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-run\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.216143 4834 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e456ecd7-82fb-40b1-9f44-04e2484ad29f-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.563078 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ps9ls-config-2zgds" event={"ID":"e456ecd7-82fb-40b1-9f44-04e2484ad29f","Type":"ContainerDied","Data":"984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda"}
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.563129 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984129d779d5a1dc1b6e34074389dccfb8bbfb626823a6f2146d620e7e056eda"
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.563097 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ps9ls-config-2zgds"
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.582537 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-sc5zg" event={"ID":"af5fa72e-0703-40bf-9041-4b3b21899793","Type":"ContainerStarted","Data":"1b806932c50ec5b289ce53745d4e2ca5e2aa36f53445ef99326f711d10e5ac8c"}
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.587298 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ps9ls-config-2zgds"]
Jan 21 16:10:06 crc kubenswrapper[4834]: I0121 16:10:06.607807 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ps9ls-config-2zgds"]
Jan 21 16:10:08 crc kubenswrapper[4834]: I0121 16:10:08.338152 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e456ecd7-82fb-40b1-9f44-04e2484ad29f" path="/var/lib/kubelet/pods/e456ecd7-82fb-40b1-9f44-04e2484ad29f/volumes"
Jan 21 16:10:08 crc kubenswrapper[4834]: I0121 16:10:08.541542 4834 scope.go:117] "RemoveContainer" containerID="9be0980a5078844f42c26f612b30795e1eba97e238949612ba5946b8805399c1"
Jan 21 16:10:08 crc kubenswrapper[4834]: I0121 16:10:08.598714 4834 scope.go:117] "RemoveContainer" containerID="2e572f993d20c359be8a8ae202a2aa3a052918ace0b71cc1efb00dc08d908beb"
Jan 21 16:10:08 crc kubenswrapper[4834]: I0121 16:10:08.653064 4834 scope.go:117] "RemoveContainer" containerID="59ad7f30330276a6977ec5c3ae1386eb763e2ad6b682bf65ed5a31be3fee391c"
Jan 21 16:10:08 crc kubenswrapper[4834]: I0121 16:10:08.721136 4834 scope.go:117] "RemoveContainer" containerID="f2ca19dde3c8bbb3dde0d9524f833fa10a93114b991abbcb19fd1146bb8644b3"
Jan 21 16:10:09 crc kubenswrapper[4834]: I0121 16:10:09.645123 4834 generic.go:334] "Generic (PLEG): container finished" podID="c78f244f-a953-49ad-b632-96f0ec0f75ee" containerID="97560c5b254d81b9bf55d8c1f5be6a6daf333704891af5190195d819949dc6d2" exitCode=0
Jan 21 16:10:09 crc kubenswrapper[4834]: I0121 16:10:09.645639 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9gdbz" event={"ID":"c78f244f-a953-49ad-b632-96f0ec0f75ee","Type":"ContainerDied","Data":"97560c5b254d81b9bf55d8c1f5be6a6daf333704891af5190195d819949dc6d2"}
Jan 21 16:10:09 crc kubenswrapper[4834]: I0121 16:10:09.679620 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-sc5zg" podStartSLOduration=7.6795935029999995 podStartE2EDuration="7.679593503s" podCreationTimestamp="2026-01-21 16:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:09.668119565 +0000 UTC m=+5955.642468620" watchObservedRunningTime="2026-01-21 16:10:09.679593503 +0000 UTC m=+5955.653942548"
Jan 21 16:10:12 crc kubenswrapper[4834]: I0121 16:10:12.696101 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6d7f5574f4-nnc6m"
Jan 21 16:10:12 crc kubenswrapper[4834]: I0121 16:10:12.717689 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6d7f5574f4-nnc6m"
Jan 21 16:10:14 crc kubenswrapper[4834]: I0121 16:10:14.336045 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"
Jan 21 16:10:14 crc kubenswrapper[4834]: E0121 16:10:14.337573 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.795849 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-9gdbz" event={"ID":"c78f244f-a953-49ad-b632-96f0ec0f75ee","Type":"ContainerStarted","Data":"249b4181608c391458ca352fc0b8cd4a636b7315cee554176a239d3ce561e2a6"}
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.796690 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-9gdbz"
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.799038 4834 generic.go:334] "Generic (PLEG): container finished" podID="af5fa72e-0703-40bf-9041-4b3b21899793" containerID="1b806932c50ec5b289ce53745d4e2ca5e2aa36f53445ef99326f711d10e5ac8c" exitCode=0
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.799040 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-sc5zg" event={"ID":"af5fa72e-0703-40bf-9041-4b3b21899793","Type":"ContainerDied","Data":"1b806932c50ec5b289ce53745d4e2ca5e2aa36f53445ef99326f711d10e5ac8c"}
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.802282 4834 generic.go:334] "Generic (PLEG): container finished" podID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerID="252675b24a422f5c846cf96726e0eac28266d62401e48c4cc473014c11f87f2a" exitCode=0
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.802311 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" event={"ID":"f90ca8c2-ebc0-4868-8582-b897995b211d","Type":"ContainerDied","Data":"252675b24a422f5c846cf96726e0eac28266d62401e48c4cc473014c11f87f2a"}
Jan 21 16:10:22 crc kubenswrapper[4834]: I0121 16:10:22.820150 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-9gdbz" podStartSLOduration=2.199066373 podStartE2EDuration="22.82012958s" podCreationTimestamp="2026-01-21 16:10:00 +0000 UTC" firstStartedPulling="2026-01-21 16:10:01.266301725 +0000 UTC m=+5947.240650770" lastFinishedPulling="2026-01-21 16:10:21.887364932 +0000 UTC m=+5967.861713977" observedRunningTime="2026-01-21 16:10:22.814743832 +0000 UTC m=+5968.789092877" watchObservedRunningTime="2026-01-21 16:10:22.82012958 +0000 UTC m=+5968.794478625"
Jan 21 16:10:23 crc kubenswrapper[4834]: I0121 16:10:23.816944 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" event={"ID":"f90ca8c2-ebc0-4868-8582-b897995b211d","Type":"ContainerStarted","Data":"cbc7b74b2208dafee6c7c0e2bf6e475247cdeac9db1ab1998e922644f25fc855"}
Jan 21 16:10:23 crc kubenswrapper[4834]: I0121 16:10:23.848882 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" podStartSLOduration=3.816841058 podStartE2EDuration="23.848854013s" podCreationTimestamp="2026-01-21 16:10:00 +0000 UTC" firstStartedPulling="2026-01-21 16:10:02.056266347 +0000 UTC m=+5948.030615392" lastFinishedPulling="2026-01-21 16:10:22.088279302 +0000 UTC m=+5968.062628347" observedRunningTime="2026-01-21 16:10:23.838337455 +0000 UTC m=+5969.812686520" watchObservedRunningTime="2026-01-21 16:10:23.848854013 +0000 UTC m=+5969.823203058"
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.226991 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.319016 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged\") pod \"af5fa72e-0703-40bf-9041-4b3b21899793\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") "
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.319069 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data\") pod \"af5fa72e-0703-40bf-9041-4b3b21899793\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") "
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.319115 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts\") pod \"af5fa72e-0703-40bf-9041-4b3b21899793\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") "
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.319311 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle\") pod \"af5fa72e-0703-40bf-9041-4b3b21899793\" (UID: \"af5fa72e-0703-40bf-9041-4b3b21899793\") "
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.334907 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts" (OuterVolumeSpecName: "scripts") pod "af5fa72e-0703-40bf-9041-4b3b21899793" (UID: "af5fa72e-0703-40bf-9041-4b3b21899793"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.339273 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data" (OuterVolumeSpecName: "config-data") pod "af5fa72e-0703-40bf-9041-4b3b21899793" (UID: "af5fa72e-0703-40bf-9041-4b3b21899793"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.355054 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "af5fa72e-0703-40bf-9041-4b3b21899793" (UID: "af5fa72e-0703-40bf-9041-4b3b21899793"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.358944 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af5fa72e-0703-40bf-9041-4b3b21899793" (UID: "af5fa72e-0703-40bf-9041-4b3b21899793"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.421736 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.421792 4834 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/af5fa72e-0703-40bf-9041-4b3b21899793-config-data-merged\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.421804 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.421816 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af5fa72e-0703-40bf-9041-4b3b21899793-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.827043 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-sc5zg"
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.827037 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-sc5zg" event={"ID":"af5fa72e-0703-40bf-9041-4b3b21899793","Type":"ContainerDied","Data":"054d6e2b1c9328d780de312f4bc7eff0dcc55137a7cf54c8e1b75050d82cf231"}
Jan 21 16:10:24 crc kubenswrapper[4834]: I0121 16:10:24.827106 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054d6e2b1c9328d780de312f4bc7eff0dcc55137a7cf54c8e1b75050d82cf231"
Jan 21 16:10:26 crc kubenswrapper[4834]: I0121 16:10:26.325590 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"
Jan 21 16:10:26 crc kubenswrapper[4834]: E0121 16:10:26.326459 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.194877 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:30 crc kubenswrapper[4834]: E0121 16:10:30.196523 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" containerName="init"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.196552 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" containerName="init"
Jan 21 16:10:30 crc kubenswrapper[4834]: E0121 16:10:30.196597 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e456ecd7-82fb-40b1-9f44-04e2484ad29f" containerName="ovn-config"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.196610 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e456ecd7-82fb-40b1-9f44-04e2484ad29f" containerName="ovn-config"
Jan 21 16:10:30 crc kubenswrapper[4834]: E0121 16:10:30.196637 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" containerName="octavia-db-sync"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.196649 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" containerName="octavia-db-sync"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.197001 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e456ecd7-82fb-40b1-9f44-04e2484ad29f" containerName="ovn-config"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.197033 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" containerName="octavia-db-sync"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.199310 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.211505 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.253013 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkbn\" (UniqueName: \"kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.253454 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.256782 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.358865 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkbn\" (UniqueName: \"kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.358982 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.359067 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.359598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.359706 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.379481 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkbn\" (UniqueName: \"kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn\") pod \"redhat-marketplace-9ttvz\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") " pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.527608 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:30 crc kubenswrapper[4834]: I0121 16:10:30.676278 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-9gdbz"
Jan 21 16:10:31 crc kubenswrapper[4834]: I0121 16:10:31.112518 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:31 crc kubenswrapper[4834]: I0121 16:10:31.894711 4834 generic.go:334] "Generic (PLEG): container finished" podID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerID="48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e" exitCode=0
Jan 21 16:10:31 crc kubenswrapper[4834]: I0121 16:10:31.894780 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerDied","Data":"48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e"}
Jan 21 16:10:31 crc kubenswrapper[4834]: I0121 16:10:31.895129 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerStarted","Data":"9b5a5984c9ded9a1b417a90ff97d9dbb565c739031c4b64e7370a7a54318159c"}
Jan 21 16:10:32 crc kubenswrapper[4834]: I0121 16:10:32.906213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerStarted","Data":"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"}
Jan 21 16:10:34 crc kubenswrapper[4834]: I0121 16:10:34.926112 4834 generic.go:334] "Generic (PLEG): container finished" podID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerID="c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064" exitCode=0
Jan 21 16:10:34 crc kubenswrapper[4834]: I0121 16:10:34.926167 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerDied","Data":"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"}
Jan 21 16:10:36 crc kubenswrapper[4834]: I0121 16:10:36.951612 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerStarted","Data":"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"}
Jan 21 16:10:36 crc kubenswrapper[4834]: I0121 16:10:36.978112 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9ttvz" podStartSLOduration=3.281342806 podStartE2EDuration="6.978091648s" podCreationTimestamp="2026-01-21 16:10:30 +0000 UTC" firstStartedPulling="2026-01-21 16:10:31.902360043 +0000 UTC m=+5977.876709088" lastFinishedPulling="2026-01-21 16:10:35.599108895 +0000 UTC m=+5981.573457930" observedRunningTime="2026-01-21 16:10:36.970284904 +0000 UTC m=+5982.944633959" watchObservedRunningTime="2026-01-21 16:10:36.978091648 +0000 UTC m=+5982.952440693"
Jan 21 16:10:40 crc kubenswrapper[4834]: I0121 16:10:40.325633 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"
Jan 21 16:10:40 crc kubenswrapper[4834]: E0121 16:10:40.326331 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:10:40 crc kubenswrapper[4834]: I0121 16:10:40.528380 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:40 crc kubenswrapper[4834]: I0121 16:10:40.528748 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:40 crc kubenswrapper[4834]: I0121 16:10:40.588277 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:41 crc kubenswrapper[4834]: I0121 16:10:41.062946 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:41 crc kubenswrapper[4834]: I0121 16:10:41.132917 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.033964 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9ttvz" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="registry-server" containerID="cri-o://d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3" gracePeriod=2
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.742763 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.798660 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities\") pod \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") "
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.798951 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content\") pod \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") "
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.799182 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtkbn\" (UniqueName: \"kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn\") pod \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\" (UID: \"3e729b61-703a-49e3-a6e2-69a1d9c9af4b\") "
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.799581 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities" (OuterVolumeSpecName: "utilities") pod "3e729b61-703a-49e3-a6e2-69a1d9c9af4b" (UID: "3e729b61-703a-49e3-a6e2-69a1d9c9af4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.801095 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.805665 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn" (OuterVolumeSpecName: "kube-api-access-gtkbn") pod "3e729b61-703a-49e3-a6e2-69a1d9c9af4b" (UID: "3e729b61-703a-49e3-a6e2-69a1d9c9af4b"). InnerVolumeSpecName "kube-api-access-gtkbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.827197 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e729b61-703a-49e3-a6e2-69a1d9c9af4b" (UID: "3e729b61-703a-49e3-a6e2-69a1d9c9af4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.902460 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:43 crc kubenswrapper[4834]: I0121 16:10:43.902494 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtkbn\" (UniqueName: \"kubernetes.io/projected/3e729b61-703a-49e3-a6e2-69a1d9c9af4b-kube-api-access-gtkbn\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.048209 4834 generic.go:334] "Generic (PLEG): container finished" podID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerID="d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3" exitCode=0
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.048273 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9ttvz"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.048293 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerDied","Data":"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"}
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.049198 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9ttvz" event={"ID":"3e729b61-703a-49e3-a6e2-69a1d9c9af4b","Type":"ContainerDied","Data":"9b5a5984c9ded9a1b417a90ff97d9dbb565c739031c4b64e7370a7a54318159c"}
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.049224 4834 scope.go:117] "RemoveContainer" containerID="d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.083703 4834 scope.go:117] "RemoveContainer" containerID="c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.092626 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.107765 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9ttvz"]
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.118400 4834 scope.go:117] "RemoveContainer" containerID="48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.163719 4834 scope.go:117] "RemoveContainer" containerID="d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"
Jan 21 16:10:44 crc kubenswrapper[4834]: E0121 16:10:44.164304 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3\": container with ID starting with d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3 not found: ID does not exist" containerID="d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.164347 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3"} err="failed to get container status \"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3\": rpc error: code = NotFound desc = could not find container \"d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3\": container with ID starting with d87afdf840807f27813ffc51be6e1460ac924784ab90f4026e1f1f93f1ea22b3 not found: ID does not exist"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.164379 4834 scope.go:117] "RemoveContainer" containerID="c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"
Jan 21 16:10:44 crc kubenswrapper[4834]: E0121 16:10:44.164746 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064\": container with ID starting with c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064 not found: ID does not exist" containerID="c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.164876 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064"} err="failed to get container status \"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064\": rpc error: code = NotFound desc = could not find container \"c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064\": container with ID starting with c749b12ef2eff86870dd6032ee83009500adf889a6b528bd38519edc925b8064 not found: ID does not exist"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.165018 4834 scope.go:117] "RemoveContainer" containerID="48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e"
Jan 21 16:10:44 crc kubenswrapper[4834]: E0121 16:10:44.165467 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e\": container with ID starting with 48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e not found: ID does not exist" containerID="48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.165591 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e"} err="failed to get container status \"48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e\": rpc error: code = NotFound desc = could not find container \"48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e\": container with ID starting with 48668932d7b0112250150e0fe75e867d5aee3f477909b50004bf6c9fc3dc017e not found: ID does not exist"
Jan 21 16:10:44 crc kubenswrapper[4834]: I0121 16:10:44.338444 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" path="/var/lib/kubelet/pods/3e729b61-703a-49e3-a6e2-69a1d9c9af4b/volumes"
Jan 21 16:10:51 crc kubenswrapper[4834]: I0121 16:10:51.324280 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd"
Jan 21 16:10:52 crc kubenswrapper[4834]: I0121 16:10:52.122679 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b"}
Jan 21 16:10:52 crc kubenswrapper[4834]: I0121 16:10:52.803180 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"]
Jan 21 16:10:52 crc kubenswrapper[4834]: I0121 16:10:52.803802 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="octavia-amphora-httpd" containerID="cri-o://cbc7b74b2208dafee6c7c0e2bf6e475247cdeac9db1ab1998e922644f25fc855" gracePeriod=30
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.148033 4834 generic.go:334] "Generic (PLEG): container finished" podID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerID="cbc7b74b2208dafee6c7c0e2bf6e475247cdeac9db1ab1998e922644f25fc855" exitCode=0
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.148172 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" event={"ID":"f90ca8c2-ebc0-4868-8582-b897995b211d","Type":"ContainerDied","Data":"cbc7b74b2208dafee6c7c0e2bf6e475247cdeac9db1ab1998e922644f25fc855"}
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.384026 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm"
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.510069 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config\") pod \"f90ca8c2-ebc0-4868-8582-b897995b211d\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") "
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.510210 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image\") pod \"f90ca8c2-ebc0-4868-8582-b897995b211d\" (UID: \"f90ca8c2-ebc0-4868-8582-b897995b211d\") "
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.545372 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f90ca8c2-ebc0-4868-8582-b897995b211d" (UID: "f90ca8c2-ebc0-4868-8582-b897995b211d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.590344 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "f90ca8c2-ebc0-4868-8582-b897995b211d" (UID: "f90ca8c2-ebc0-4868-8582-b897995b211d"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.612612 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f90ca8c2-ebc0-4868-8582-b897995b211d-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:53 crc kubenswrapper[4834]: I0121 16:10:53.612648 4834 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f90ca8c2-ebc0-4868-8582-b897995b211d-amphora-image\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.175196 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm" event={"ID":"f90ca8c2-ebc0-4868-8582-b897995b211d","Type":"ContainerDied","Data":"7bb4d7310cbc5c02f56765aeda4e49dc35ae385257083697fd4166fbb14b72ad"}
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.175253 4834 scope.go:117] "RemoveContainer" containerID="cbc7b74b2208dafee6c7c0e2bf6e475247cdeac9db1ab1998e922644f25fc855"
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.175306 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-pf6wm"
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.195894 4834 scope.go:117] "RemoveContainer" containerID="252675b24a422f5c846cf96726e0eac28266d62401e48c4cc473014c11f87f2a"
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.230311 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"]
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.244083 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-pf6wm"]
Jan 21 16:10:54 crc kubenswrapper[4834]: I0121 16:10:54.346374 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" path="/var/lib/kubelet/pods/f90ca8c2-ebc0-4868-8582-b897995b211d/volumes"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.698085 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-bf9c8"]
Jan 21 16:11:04 crc kubenswrapper[4834]: E0121 16:11:04.699160 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="extract-content"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699179 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="extract-content"
Jan 21 16:11:04 crc kubenswrapper[4834]: E0121 16:11:04.699198 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="extract-utilities"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699207 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="extract-utilities"
Jan 21 16:11:04 crc kubenswrapper[4834]: E0121 16:11:04.699225 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="octavia-amphora-httpd"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699234 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="octavia-amphora-httpd"
Jan 21 16:11:04 crc kubenswrapper[4834]: E0121 16:11:04.699257 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="init"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699265 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="init"
Jan 21 16:11:04 crc kubenswrapper[4834]: E0121 16:11:04.699292 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="registry-server"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699300 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="registry-server"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699550 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e729b61-703a-49e3-a6e2-69a1d9c9af4b" containerName="registry-server"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.699565 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90ca8c2-ebc0-4868-8582-b897995b211d" containerName="octavia-amphora-httpd"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.700813 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.704474 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.704482 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.704782 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.708373 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-bf9c8"]
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856255 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data-merged\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856380 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e34a7056-efc5-465b-b5d2-80193f49b73f-hm-ports\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856444 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-combined-ca-bundle\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856694 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-scripts\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856735 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.856832 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-amphora-certs\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958458 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-scripts\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958529 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958579 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-amphora-certs\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958714 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data-merged\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958752 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e34a7056-efc5-465b-b5d2-80193f49b73f-hm-ports\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.958776 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-combined-ca-bundle\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.960885 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e34a7056-efc5-465b-b5d2-80193f49b73f-hm-ports\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.961277 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data-merged\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.966015 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-amphora-certs\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.979573 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-config-data\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.987867 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-combined-ca-bundle\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:04 crc kubenswrapper[4834]: I0121 16:11:04.988858 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34a7056-efc5-465b-b5d2-80193f49b73f-scripts\") pod \"octavia-healthmanager-bf9c8\" (UID: \"e34a7056-efc5-465b-b5d2-80193f49b73f\") " pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:05 crc kubenswrapper[4834]: I0121 16:11:05.034139 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-bf9c8"
Jan 21 16:11:05 crc kubenswrapper[4834]: W0121 16:11:05.562529 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode34a7056_efc5_465b_b5d2_80193f49b73f.slice/crio-bb83030716cb797ee200d6ec37f7eb42674bb9b92aaeed7e6e05a17f8d04a882 WatchSource:0}: Error finding container bb83030716cb797ee200d6ec37f7eb42674bb9b92aaeed7e6e05a17f8d04a882: Status 404 returned error can't find the container with id bb83030716cb797ee200d6ec37f7eb42674bb9b92aaeed7e6e05a17f8d04a882
Jan 21 16:11:05 crc kubenswrapper[4834]: I0121 16:11:05.586780 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-bf9c8"]
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.288994 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bf9c8" event={"ID":"e34a7056-efc5-465b-b5d2-80193f49b73f","Type":"ContainerStarted","Data":"f2e723be95b34a3b7a82a1e0f133e73a2cc2a9b0317c9ce57c1517cd0d6b8ec1"}
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.289369 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bf9c8" event={"ID":"e34a7056-efc5-465b-b5d2-80193f49b73f","Type":"ContainerStarted","Data":"bb83030716cb797ee200d6ec37f7eb42674bb9b92aaeed7e6e05a17f8d04a882"}
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.578397 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-9gpf9"]
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.580840 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.583979 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.584062 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.596179 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9gpf9"]
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.704752 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-amphora-certs\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.704849 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-scripts\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.704889 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.704950 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data-merged\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.704987 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-combined-ca-bundle\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.705014 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f26b39f3-1325-485e-a56a-59cd626ceb94-hm-ports\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.807031 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f26b39f3-1325-485e-a56a-59cd626ceb94-hm-ports\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9"
Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.807343 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName:
\"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-amphora-certs\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.807561 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-scripts\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.807727 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.808016 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data-merged\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.808128 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f26b39f3-1325-485e-a56a-59cd626ceb94-hm-ports\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.808334 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-combined-ca-bundle\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.808521 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data-merged\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.813049 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-amphora-certs\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.813179 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-config-data\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.813520 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-combined-ca-bundle\") pod \"octavia-housekeeping-9gpf9\" (UID: 
\"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.814298 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26b39f3-1325-485e-a56a-59cd626ceb94-scripts\") pod \"octavia-housekeeping-9gpf9\" (UID: \"f26b39f3-1325-485e-a56a-59cd626ceb94\") " pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:06 crc kubenswrapper[4834]: I0121 16:11:06.897280 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.438881 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9gpf9"] Jan 21 16:11:07 crc kubenswrapper[4834]: W0121 16:11:07.442708 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26b39f3_1325_485e_a56a_59cd626ceb94.slice/crio-26cc1820b08488e79719874584b89231be7a4460824c618e200d52bba7eaf7e6 WatchSource:0}: Error finding container 26cc1820b08488e79719874584b89231be7a4460824c618e200d52bba7eaf7e6: Status 404 returned error can't find the container with id 26cc1820b08488e79719874584b89231be7a4460824c618e200d52bba7eaf7e6 Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.779025 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-k8p58"] Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.780732 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.784222 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.788103 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.794759 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-k8p58"] Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931469 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-amphora-certs\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931558 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-scripts\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931605 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-combined-ca-bundle\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931630 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data-merged\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931687 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:07 crc kubenswrapper[4834]: I0121 16:11:07.931732 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-hm-ports\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033393 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-amphora-certs\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033488 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-scripts\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033555 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-combined-ca-bundle\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033588 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data-merged\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033644 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.033698 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-hm-ports\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.034538 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data-merged\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 
16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.035485 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-hm-ports\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.039158 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-config-data\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.039372 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-combined-ca-bundle\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.039167 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-amphora-certs\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.040363 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/660b1ee6-aacb-4c91-a79d-f74b4ebfb640-scripts\") pod \"octavia-worker-k8p58\" (UID: \"660b1ee6-aacb-4c91-a79d-f74b4ebfb640\") " pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.100722 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-k8p58" Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.312441 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9gpf9" event={"ID":"f26b39f3-1325-485e-a56a-59cd626ceb94","Type":"ContainerStarted","Data":"26cc1820b08488e79719874584b89231be7a4460824c618e200d52bba7eaf7e6"} Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.315574 4834 generic.go:334] "Generic (PLEG): container finished" podID="e34a7056-efc5-465b-b5d2-80193f49b73f" containerID="f2e723be95b34a3b7a82a1e0f133e73a2cc2a9b0317c9ce57c1517cd0d6b8ec1" exitCode=0 Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.315630 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bf9c8" event={"ID":"e34a7056-efc5-465b-b5d2-80193f49b73f","Type":"ContainerDied","Data":"f2e723be95b34a3b7a82a1e0f133e73a2cc2a9b0317c9ce57c1517cd0d6b8ec1"} Jan 21 16:11:08 crc kubenswrapper[4834]: I0121 16:11:08.636178 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-k8p58"] Jan 21 16:11:08 crc kubenswrapper[4834]: W0121 16:11:08.960262 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod660b1ee6_aacb_4c91_a79d_f74b4ebfb640.slice/crio-7a03f87154a9048a5fd1ece15a7eea637233fa08d116bd7dcfb1b821a739b81f WatchSource:0}: Error finding container 7a03f87154a9048a5fd1ece15a7eea637233fa08d116bd7dcfb1b821a739b81f: Status 404 returned error can't find the container with id 7a03f87154a9048a5fd1ece15a7eea637233fa08d116bd7dcfb1b821a739b81f Jan 21 16:11:09 crc kubenswrapper[4834]: I0121 16:11:09.329068 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bf9c8" event={"ID":"e34a7056-efc5-465b-b5d2-80193f49b73f","Type":"ContainerStarted","Data":"ba6af997c9460268dac1dd664228516b1888956d04158a1be1c8f720f8389340"} Jan 21 16:11:09 crc kubenswrapper[4834]: I0121 16:11:09.329701 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-bf9c8" Jan 21 16:11:09 crc kubenswrapper[4834]: I0121 16:11:09.332960 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k8p58" event={"ID":"660b1ee6-aacb-4c91-a79d-f74b4ebfb640","Type":"ContainerStarted","Data":"7a03f87154a9048a5fd1ece15a7eea637233fa08d116bd7dcfb1b821a739b81f"} Jan 21 16:11:09 crc kubenswrapper[4834]: I0121 16:11:09.352260 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-bf9c8" podStartSLOduration=5.352237693 podStartE2EDuration="5.352237693s" podCreationTimestamp="2026-01-21 16:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:09.350666485 +0000 UTC m=+6015.325015540" watchObservedRunningTime="2026-01-21 16:11:09.352237693 +0000 UTC m=+6015.326586748" Jan 21 16:11:10 crc kubenswrapper[4834]: I0121 16:11:10.344095 4834 generic.go:334] "Generic (PLEG): container finished" podID="f26b39f3-1325-485e-a56a-59cd626ceb94" containerID="4cc5bc679053aaede15f290375b4a789a044c2b0b69ea6d54c036b9f6f6019d7" exitCode=0 Jan 21 16:11:10 crc kubenswrapper[4834]: I0121 16:11:10.344165 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9gpf9" 
event={"ID":"f26b39f3-1325-485e-a56a-59cd626ceb94","Type":"ContainerDied","Data":"4cc5bc679053aaede15f290375b4a789a044c2b0b69ea6d54c036b9f6f6019d7"} Jan 21 16:11:11 crc kubenswrapper[4834]: I0121 16:11:11.356536 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9gpf9" event={"ID":"f26b39f3-1325-485e-a56a-59cd626ceb94","Type":"ContainerStarted","Data":"1c1582872c54a92f4dfa22db28a1acc0dbf089288c97cb9ec6b82ebb78209de9"} Jan 21 16:11:11 crc kubenswrapper[4834]: I0121 16:11:11.356804 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:11 crc kubenswrapper[4834]: I0121 16:11:11.358071 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k8p58" event={"ID":"660b1ee6-aacb-4c91-a79d-f74b4ebfb640","Type":"ContainerStarted","Data":"8d93a6de6193039df9cbc4e9e0ea96182e5f0c2ef9f7c1a80a5b7f6862c9078b"} Jan 21 16:11:11 crc kubenswrapper[4834]: I0121 16:11:11.392096 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-9gpf9" podStartSLOduration=3.806948214 podStartE2EDuration="5.39207191s" podCreationTimestamp="2026-01-21 16:11:06 +0000 UTC" firstStartedPulling="2026-01-21 16:11:07.445520752 +0000 UTC m=+6013.419869797" lastFinishedPulling="2026-01-21 16:11:09.030644448 +0000 UTC m=+6015.004993493" observedRunningTime="2026-01-21 16:11:11.38311942 +0000 UTC m=+6017.357468465" watchObservedRunningTime="2026-01-21 16:11:11.39207191 +0000 UTC m=+6017.366420955" Jan 21 16:11:12 crc kubenswrapper[4834]: I0121 16:11:12.373033 4834 generic.go:334] "Generic (PLEG): container finished" podID="660b1ee6-aacb-4c91-a79d-f74b4ebfb640" containerID="8d93a6de6193039df9cbc4e9e0ea96182e5f0c2ef9f7c1a80a5b7f6862c9078b" exitCode=0 Jan 21 16:11:12 crc kubenswrapper[4834]: I0121 16:11:12.373092 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k8p58" event={"ID":"660b1ee6-aacb-4c91-a79d-f74b4ebfb640","Type":"ContainerDied","Data":"8d93a6de6193039df9cbc4e9e0ea96182e5f0c2ef9f7c1a80a5b7f6862c9078b"} Jan 21 16:11:13 crc kubenswrapper[4834]: I0121 16:11:13.384328 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-k8p58" event={"ID":"660b1ee6-aacb-4c91-a79d-f74b4ebfb640","Type":"ContainerStarted","Data":"987ed9b10265c74babf35a4f44364dc05c96e194cfa9bb1f57728e25f8ab6adb"} Jan 21 16:11:13 crc kubenswrapper[4834]: I0121 16:11:13.384944 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-k8p58" Jan 21 16:11:13 crc kubenswrapper[4834]: I0121 16:11:13.410535 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-k8p58" podStartSLOduration=5.027110697 podStartE2EDuration="6.410514488s" podCreationTimestamp="2026-01-21 16:11:07 +0000 UTC" firstStartedPulling="2026-01-21 16:11:08.963349178 +0000 UTC m=+6014.937698223" lastFinishedPulling="2026-01-21 16:11:10.346752969 +0000 UTC m=+6016.321102014" observedRunningTime="2026-01-21 16:11:13.403409956 +0000 UTC m=+6019.377759011" watchObservedRunningTime="2026-01-21 16:11:13.410514488 +0000 UTC m=+6019.384863533" Jan 21 16:11:20 crc kubenswrapper[4834]: I0121 16:11:20.086089 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-bf9c8" Jan 21 16:11:21 crc kubenswrapper[4834]: I0121 16:11:21.931653 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/octavia-housekeeping-9gpf9" Jan 21 16:11:23 crc kubenswrapper[4834]: I0121 16:11:23.130214 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-k8p58" Jan 21 16:11:49 crc kubenswrapper[4834]: I0121 16:11:49.055453 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hkjr6"] Jan 21 16:11:49 crc kubenswrapper[4834]: I0121 16:11:49.064719 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-497c-account-create-update-ll5ml"] Jan 21 16:11:49 crc kubenswrapper[4834]: I0121 16:11:49.072946 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-497c-account-create-update-ll5ml"] Jan 21 16:11:49 crc kubenswrapper[4834]: I0121 16:11:49.081720 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hkjr6"] Jan 21 16:11:50 crc kubenswrapper[4834]: I0121 16:11:50.338898 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af90d09b-da72-4ca5-bc92-3909ba5ac898" path="/var/lib/kubelet/pods/af90d09b-da72-4ca5-bc92-3909ba5ac898/volumes" Jan 21 16:11:50 crc kubenswrapper[4834]: I0121 16:11:50.340170 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e713393e-fda6-487a-988d-971d2c270a65" path="/var/lib/kubelet/pods/e713393e-fda6-487a-988d-971d2c270a65/volumes" Jan 21 16:11:58 crc kubenswrapper[4834]: I0121 16:11:58.027773 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wvv9k"] Jan 21 16:11:58 crc kubenswrapper[4834]: I0121 16:11:58.036511 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wvv9k"] Jan 21 16:11:58 crc kubenswrapper[4834]: I0121 16:11:58.338986 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18cc8860-1899-4cd2-af4f-25a4ea9ef189" path="/var/lib/kubelet/pods/18cc8860-1899-4cd2-af4f-25a4ea9ef189/volumes" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.381296 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.392518 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.395169 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.399438 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.399698 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jrgf6" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.399725 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.405542 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.482200 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.482486 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-log" containerID="cri-o://d886106ede6b5724900dfa95fce683b870c2ef13944ad83a7b7c53ba355edeff" gracePeriod=30 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.483061 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-httpd" containerID="cri-o://0d064c168b9b89f729fda3eb70534de975c4fe51f00a37e3550137fc504ccdf7" gracePeriod=30 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.554772 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.555312 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-log" containerID="cri-o://5aa6aa3e232c7fc2eabf7cd295e61ee64b75142ca5801d38b906eee1ce90e604" gracePeriod=30 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.555483 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-httpd" containerID="cri-o://d2ee4ef26d47f2a6e9417441cc5f03cdf22d68c010e0a68999890d1384dd9f05" gracePeriod=30 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.558266 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxtc\" (UniqueName: \"kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.558392 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.558422 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.558505 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.558571 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.589058 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.591429 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.622337 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.662800 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vxtc\" (UniqueName: \"kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.664721 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.674321 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.675019 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.676165 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.675836 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.676055 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.675629 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.685379 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxtc\" (UniqueName: \"kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.688229 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key\") pod \"horizon-756657db5-vkplz\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.726261 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.778686 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.780862 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.781701 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.782052 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkhh\" (UniqueName: \"kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.782352 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.874333 4834 generic.go:334] "Generic (PLEG): container finished" podID="3cde9223-344d-47b3-afc8-295962641499" containerID="5aa6aa3e232c7fc2eabf7cd295e61ee64b75142ca5801d38b906eee1ce90e604" exitCode=143 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.874407 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerDied","Data":"5aa6aa3e232c7fc2eabf7cd295e61ee64b75142ca5801d38b906eee1ce90e604"} Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.884033 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.884078 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.884101 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.884174 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggkhh\" (UniqueName: \"kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.884355 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.885695 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.886151 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.887442 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.891467 4834 generic.go:334] "Generic (PLEG): container finished" podID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerID="d886106ede6b5724900dfa95fce683b870c2ef13944ad83a7b7c53ba355edeff" exitCode=143 Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.891520 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerDied","Data":"d886106ede6b5724900dfa95fce683b870c2ef13944ad83a7b7c53ba355edeff"} Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.915082 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggkhh\" (UniqueName: \"kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:06 crc kubenswrapper[4834]: I0121 16:12:06.922673 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key\") pod \"horizon-5865df457c-plm9x\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.106438 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 
16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.110370 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.138013 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"] Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.144793 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.162183 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"] Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.194535 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5w6\" (UniqueName: \"kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.194625 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.194680 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.194838 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.194960 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.229270 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:12:07 crc kubenswrapper[4834]: W0121 16:12:07.235168 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod756c8c5a_5683_472c_9717_d3b2ddd3efc3.slice/crio-cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3 WatchSource:0}: Error finding container cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3: Status 404 returned error can't find the container with id cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3 Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.296738 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.296859 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.296909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5w6\" (UniqueName: \"kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.297067 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.297114 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.297771 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.298319 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.301255 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.303329 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.313149 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5w6\" (UniqueName: \"kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6\") pod \"horizon-ff5d6ccd9-qr9p9\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") " pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 
16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.471453 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.695565 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 16:12:07 crc kubenswrapper[4834]: W0121 16:12:07.705896 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod830bbb17_8ddd_4e9a_9bce_0ed4aa1e626a.slice/crio-118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35 WatchSource:0}: Error finding container 118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35: Status 404 returned error can't find the container with id 118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35 Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.920574 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerStarted","Data":"cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3"} Jan 21 16:12:07 crc kubenswrapper[4834]: I0121 16:12:07.922595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerStarted","Data":"118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35"} Jan 21 16:12:08 crc kubenswrapper[4834]: I0121 16:12:08.165333 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"] Jan 21 16:12:08 crc kubenswrapper[4834]: I0121 16:12:08.935979 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerStarted","Data":"38013e77835596c119f81e2ab12b7cf8cb511efa4582fa34a0450d908dc60b53"} Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.272362 4834 scope.go:117] "RemoveContainer" containerID="36408a5008327e613c172e19e71cb77247a75649a61bd4116938644799b4f560" Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.332098 4834 scope.go:117] "RemoveContainer" containerID="296c4478ec54693a3f22f64a6ffeeab04017ad37b38c60175087f62f3f82da76" Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.373091 4834 scope.go:117] "RemoveContainer" containerID="1d57fbeeb6983efb82a4a8767e639471c58f74be4c1cdf65c0e9c2c4b06a71f8" Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.944612 4834 generic.go:334] "Generic (PLEG): container finished" podID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerID="0d064c168b9b89f729fda3eb70534de975c4fe51f00a37e3550137fc504ccdf7" exitCode=0 Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.944959 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerDied","Data":"0d064c168b9b89f729fda3eb70534de975c4fe51f00a37e3550137fc504ccdf7"} Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.979579 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.42:9292/healthcheck\": read tcp 10.217.0.2:50202->10.217.1.42:9292: read: connection reset by peer" Jan 21 16:12:09 crc kubenswrapper[4834]: I0121 16:12:09.980032 4834 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.42:9292/healthcheck\": read tcp 10.217.0.2:50204->10.217.1.42:9292: read: connection reset by peer" Jan 21 16:12:10 crc kubenswrapper[4834]: I0121 16:12:10.955949 4834 generic.go:334] "Generic (PLEG): container finished" podID="3cde9223-344d-47b3-afc8-295962641499" containerID="d2ee4ef26d47f2a6e9417441cc5f03cdf22d68c010e0a68999890d1384dd9f05" exitCode=0 Jan 21 16:12:10 crc kubenswrapper[4834]: I0121 16:12:10.956061 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerDied","Data":"d2ee4ef26d47f2a6e9417441cc5f03cdf22d68c010e0a68999890d1384dd9f05"} Jan 21 16:12:14 crc kubenswrapper[4834]: I0121 16:12:14.945658 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.000185 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"31bd15e2-d0e6-481c-af91-7959a5ea55b5","Type":"ContainerDied","Data":"e6abacc9f26bbb09f98f6a7d77fddfc240ab39477ab6190542bd7b4f065c9bbd"} Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.000238 4834 scope.go:117] "RemoveContainer" containerID="0d064c168b9b89f729fda3eb70534de975c4fe51f00a37e3550137fc504ccdf7" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.000367 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.037160 4834 scope.go:117] "RemoveContainer" containerID="d886106ede6b5724900dfa95fce683b870c2ef13944ad83a7b7c53ba355edeff" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098198 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp9s4\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098578 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098801 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098882 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098961 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.098993 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.099036 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data\") pod \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\" (UID: \"31bd15e2-d0e6-481c-af91-7959a5ea55b5\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.102659 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.103074 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs" (OuterVolumeSpecName: "logs") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.110324 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts" (OuterVolumeSpecName: "scripts") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.113227 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4" (OuterVolumeSpecName: "kube-api-access-rp9s4") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "kube-api-access-rp9s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.114412 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph" (OuterVolumeSpecName: "ceph") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.149964 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.177446 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data" (OuterVolumeSpecName: "config-data") pod "31bd15e2-d0e6-481c-af91-7959a5ea55b5" (UID: "31bd15e2-d0e6-481c-af91-7959a5ea55b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201295 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp9s4\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-kube-api-access-rp9s4\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201331 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201342 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201356 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201366 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31bd15e2-d0e6-481c-af91-7959a5ea55b5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201379 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/31bd15e2-d0e6-481c-af91-7959a5ea55b5-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.201389 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31bd15e2-d0e6-481c-af91-7959a5ea55b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.356648 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.359481 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.375534 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.422114 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:15 crc kubenswrapper[4834]: E0121 16:12:15.422668 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.422683 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: E0121 16:12:15.422721 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.422727 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: E0121 16:12:15.422744 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.422750 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: E0121 16:12:15.422759 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.422765 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.423043 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.423064 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-log" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.423110 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.423126 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cde9223-344d-47b3-afc8-295962641499" containerName="glance-httpd" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.424899 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.428615 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.475437 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.521260 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.521338 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.521409 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkdz\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.521496 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522176 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522261 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522285 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs\") pod \"3cde9223-344d-47b3-afc8-295962641499\" (UID: \"3cde9223-344d-47b3-afc8-295962641499\") " Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522407 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49stv\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-kube-api-access-49stv\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522469 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-ceph\") pod \"glance-default-external-api-0\" 
(UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.522502 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.523581 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-logs\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.530891 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.531395 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.531584 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.531658 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs" (OuterVolumeSpecName: "logs") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.532016 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.558801 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts" (OuterVolumeSpecName: "scripts") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.560186 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.566110 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz" (OuterVolumeSpecName: "kube-api-access-vhkdz") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "kube-api-access-vhkdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.569703 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph" (OuterVolumeSpecName: "ceph") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.628252 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635301 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635450 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49stv\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-kube-api-access-49stv\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635535 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-ceph\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635581 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635633 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-logs\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635662 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635775 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635892 4834 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3cde9223-344d-47b3-afc8-295962641499-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635915 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635946 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635961 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.635975 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhkdz\" (UniqueName: \"kubernetes.io/projected/3cde9223-344d-47b3-afc8-295962641499-kube-api-access-vhkdz\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.638177 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-logs\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.639005 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/187f9c99-b482-4a74-9bef-7017e691f1e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.643030 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.644456 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-ceph\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.644911 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.646610 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/187f9c99-b482-4a74-9bef-7017e691f1e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.662638 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data" (OuterVolumeSpecName: "config-data") pod "3cde9223-344d-47b3-afc8-295962641499" (UID: "3cde9223-344d-47b3-afc8-295962641499"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.662685 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49stv\" (UniqueName: \"kubernetes.io/projected/187f9c99-b482-4a74-9bef-7017e691f1e2-kube-api-access-49stv\") pod \"glance-default-external-api-0\" (UID: \"187f9c99-b482-4a74-9bef-7017e691f1e2\") " pod="openstack/glance-default-external-api-0" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.738971 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde9223-344d-47b3-afc8-295962641499-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:15 crc kubenswrapper[4834]: I0121 16:12:15.805802 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.032622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3cde9223-344d-47b3-afc8-295962641499","Type":"ContainerDied","Data":"0b45d55b4e2584b3f08897e9db3570b006542d3736e121c39e8cdfdcf03327d0"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.032687 4834 scope.go:117] "RemoveContainer" containerID="d2ee4ef26d47f2a6e9417441cc5f03cdf22d68c010e0a68999890d1384dd9f05" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.032854 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.044046 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerStarted","Data":"ca5c3f15cee86c56029b8a65de022dfec1c6d903bf126fa9ad348a526be4fbbe"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.044092 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerStarted","Data":"f334e13796776a405d7b41c1c64bc7237ef3672440bbb3a1de1da93d69a8713b"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.044201 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5865df457c-plm9x" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon-log" containerID="cri-o://f334e13796776a405d7b41c1c64bc7237ef3672440bbb3a1de1da93d69a8713b" gracePeriod=30 Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.044288 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5865df457c-plm9x" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon" containerID="cri-o://ca5c3f15cee86c56029b8a65de022dfec1c6d903bf126fa9ad348a526be4fbbe" gracePeriod=30 Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.053577 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerStarted","Data":"a577e9b48de6ebaf0bbb379aa8e3624a31b81edf08ba7fd55d7e4a9be80bd44d"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.053624 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerStarted","Data":"ae7bff0d0fa14cf3cca0744de0e2382ae7fa07f5df583e540b4a0e00c0015abf"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.059284 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerStarted","Data":"01917efb5938ab27671e453452ea40599ed205712915bae1c0b6db63d8a46014"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.059387 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerStarted","Data":"f717a62a5a73125c73be144d418ee5bdc91381ab1b16df06a342385bd203a91f"} Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.071747 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5865df457c-plm9x" podStartSLOduration=2.787782927 podStartE2EDuration="10.071726161s" podCreationTimestamp="2026-01-21 16:12:06 +0000 UTC" firstStartedPulling="2026-01-21 16:12:07.710197169 +0000 UTC m=+6073.684546214" lastFinishedPulling="2026-01-21 16:12:14.994140403 +0000 UTC m=+6080.968489448" observedRunningTime="2026-01-21 16:12:16.066608411 +0000 UTC m=+6082.040957476" watchObservedRunningTime="2026-01-21 16:12:16.071726161 +0000 UTC m=+6082.046075206" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.089272 4834 scope.go:117] "RemoveContainer" containerID="5aa6aa3e232c7fc2eabf7cd295e61ee64b75142ca5801d38b906eee1ce90e604" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.124456 4834 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/horizon-756657db5-vkplz" podStartSLOduration=2.413201467 podStartE2EDuration="10.124431946s" podCreationTimestamp="2026-01-21 16:12:06 +0000 UTC" firstStartedPulling="2026-01-21 16:12:07.241366118 +0000 UTC m=+6073.215715163" lastFinishedPulling="2026-01-21 16:12:14.952596597 +0000 UTC m=+6080.926945642" observedRunningTime="2026-01-21 16:12:16.082003112 +0000 UTC m=+6082.056352167" watchObservedRunningTime="2026-01-21 16:12:16.124431946 +0000 UTC m=+6082.098781001" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.142191 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-ff5d6ccd9-qr9p9" podStartSLOduration=2.354176581 podStartE2EDuration="9.142172049s" podCreationTimestamp="2026-01-21 16:12:07 +0000 UTC" firstStartedPulling="2026-01-21 16:12:08.174637272 +0000 UTC m=+6074.148986327" lastFinishedPulling="2026-01-21 16:12:14.96263275 +0000 UTC m=+6080.936981795" observedRunningTime="2026-01-21 16:12:16.104101752 +0000 UTC m=+6082.078450817" watchObservedRunningTime="2026-01-21 16:12:16.142172049 +0000 UTC m=+6082.116521094" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.169816 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.184434 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.192760 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.194588 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.199360 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.207560 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.248849 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.248891 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zjq\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-kube-api-access-x9zjq\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.248974 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.248993 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.249008 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.249088 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.249162 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.338504 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bd15e2-d0e6-481c-af91-7959a5ea55b5" path="/var/lib/kubelet/pods/31bd15e2-d0e6-481c-af91-7959a5ea55b5/volumes" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.340141 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cde9223-344d-47b3-afc8-295962641499" path="/var/lib/kubelet/pods/3cde9223-344d-47b3-afc8-295962641499/volumes" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.350864 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.350919 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.350971 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.351083 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.351219 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.351344 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.351386 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zjq\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-kube-api-access-x9zjq\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.352336 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-logs\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.352683 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.356879 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.357254 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.365118 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.370200 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.373036 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zjq\" (UniqueName: \"kubernetes.io/projected/debc9a9a-02de-46bd-ad18-a8c5527e7bc2-kube-api-access-x9zjq\") 
pod \"glance-default-internal-api-0\" (UID: \"debc9a9a-02de-46bd-ad18-a8c5527e7bc2\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: W0121 16:12:16.429654 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod187f9c99_b482_4a74_9bef_7017e691f1e2.slice/crio-508160b2725510a8ff6774d6aea5ef55403bd7395c7e0c58426359924f5aa889 WatchSource:0}: Error finding container 508160b2725510a8ff6774d6aea5ef55403bd7395c7e0c58426359924f5aa889: Status 404 returned error can't find the container with id 508160b2725510a8ff6774d6aea5ef55403bd7395c7e0c58426359924f5aa889 Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.436336 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.530008 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.726816 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:16 crc kubenswrapper[4834]: I0121 16:12:16.726896 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:17 crc kubenswrapper[4834]: I0121 16:12:17.074354 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"187f9c99-b482-4a74-9bef-7017e691f1e2","Type":"ContainerStarted","Data":"508160b2725510a8ff6774d6aea5ef55403bd7395c7e0c58426359924f5aa889"} Jan 21 16:12:17 crc kubenswrapper[4834]: I0121 16:12:17.112552 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:17 crc kubenswrapper[4834]: I0121 16:12:17.333780 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:12:17 crc kubenswrapper[4834]: I0121 16:12:17.471703 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:17 crc kubenswrapper[4834]: I0121 16:12:17.471778 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:18 crc kubenswrapper[4834]: I0121 16:12:18.088734 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"debc9a9a-02de-46bd-ad18-a8c5527e7bc2","Type":"ContainerStarted","Data":"8344039ffc103ea72c0ae39b67361d263eaefd9eaf2550afd6d7b4e894283efc"} Jan 21 16:12:18 crc kubenswrapper[4834]: I0121 16:12:18.092509 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"187f9c99-b482-4a74-9bef-7017e691f1e2","Type":"ContainerStarted","Data":"ef98f48fe056474d7a44e728c73affb4f4b434c5ac08242fd78fe5176e441412"} Jan 21 16:12:19 crc kubenswrapper[4834]: I0121 16:12:19.104002 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"debc9a9a-02de-46bd-ad18-a8c5527e7bc2","Type":"ContainerStarted","Data":"e65260ded973b151be276f3a86ed45c92bbef5818e302d2d10b1ce6deca48579"} Jan 21 16:12:19 crc kubenswrapper[4834]: I0121 16:12:19.104349 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"debc9a9a-02de-46bd-ad18-a8c5527e7bc2","Type":"ContainerStarted","Data":"c4065f2fe0ca43ad73b68eb60b98d2052c5169cb9da1419c6c96c21df574734a"} Jan 21 16:12:19 crc kubenswrapper[4834]: I0121 16:12:19.110898 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"187f9c99-b482-4a74-9bef-7017e691f1e2","Type":"ContainerStarted","Data":"a4b5043128d83a24a13f571a451600367518a7ad4fecd531126fc86eb90ae122"} Jan 21 16:12:19 crc kubenswrapper[4834]: I0121 16:12:19.141205 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.141180818 podStartE2EDuration="3.141180818s" podCreationTimestamp="2026-01-21 16:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:19.121902266 +0000 UTC m=+6085.096251331" watchObservedRunningTime="2026-01-21 16:12:19.141180818 +0000 UTC m=+6085.115529873" Jan 21 16:12:19 crc kubenswrapper[4834]: I0121 16:12:19.163010 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.162985088 podStartE2EDuration="4.162985088s" podCreationTimestamp="2026-01-21 16:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:12:19.150068565 +0000 UTC m=+6085.124417630" watchObservedRunningTime="2026-01-21 16:12:19.162985088 +0000 UTC m=+6085.137334133" Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.045405 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dxndl"] Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.053699 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dxndl"] Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.805863 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.806216 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.852448 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:12:25 crc kubenswrapper[4834]: I0121 16:12:25.859602 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.030937 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3667-account-create-update-64c8h"] Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.042811 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3667-account-create-update-64c8h"] Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.251619 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.251692 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.340791 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53db6835-81f8-4c27-878f-1bee854eaead" 
path="/var/lib/kubelet/pods/53db6835-81f8-4c27-878f-1bee854eaead/volumes" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.342627 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fecabc-f24c-483d-85c8-d0069e888c53" path="/var/lib/kubelet/pods/95fecabc-f24c-483d-85c8-d0069e888c53/volumes" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.530975 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.532759 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.576397 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.595589 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:26 crc kubenswrapper[4834]: I0121 16:12:26.729979 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 21 16:12:27 crc kubenswrapper[4834]: I0121 16:12:27.271130 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:27 crc kubenswrapper[4834]: I0121 16:12:27.271504 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:27 crc kubenswrapper[4834]: I0121 16:12:27.473178 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 21 16:12:28 crc kubenswrapper[4834]: I0121 16:12:28.710346 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:12:28 crc kubenswrapper[4834]: I0121 16:12:28.710510 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:12:28 crc kubenswrapper[4834]: I0121 16:12:28.716895 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 16:12:29 crc kubenswrapper[4834]: I0121 16:12:29.803401 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:29 crc kubenswrapper[4834]: I0121 16:12:29.803881 4834 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 16:12:29 crc kubenswrapper[4834]: I0121 16:12:29.813912 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 16:12:35 crc kubenswrapper[4834]: I0121 16:12:35.037493 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b7qq4"] Jan 21 16:12:35 crc kubenswrapper[4834]: I0121 16:12:35.052604 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b7qq4"] Jan 21 16:12:36 crc 
kubenswrapper[4834]: I0121 16:12:36.335624 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f3e6a2-443f-4851-93a7-835f9fa507ed" path="/var/lib/kubelet/pods/57f3e6a2-443f-4851-93a7-835f9fa507ed/volumes" Jan 21 16:12:38 crc kubenswrapper[4834]: I0121 16:12:38.950831 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:39 crc kubenswrapper[4834]: I0121 16:12:39.293800 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:40 crc kubenswrapper[4834]: I0121 16:12:40.881097 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:12:41 crc kubenswrapper[4834]: I0121 16:12:41.137425 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:12:41 crc kubenswrapper[4834]: I0121 16:12:41.201207 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:12:41 crc kubenswrapper[4834]: I0121 16:12:41.422103 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon-log" containerID="cri-o://f717a62a5a73125c73be144d418ee5bdc91381ab1b16df06a342385bd203a91f" gracePeriod=30 Jan 21 16:12:41 crc kubenswrapper[4834]: I0121 16:12:41.422161 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" containerID="cri-o://01917efb5938ab27671e453452ea40599ed205712915bae1c0b6db63d8a46014" gracePeriod=30 Jan 21 16:12:45 crc kubenswrapper[4834]: I0121 16:12:45.466708 4834 generic.go:334] "Generic (PLEG): container finished" podID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerID="01917efb5938ab27671e453452ea40599ed205712915bae1c0b6db63d8a46014" exitCode=0 Jan 21 16:12:45 crc kubenswrapper[4834]: I0121 16:12:45.466807 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerDied","Data":"01917efb5938ab27671e453452ea40599ed205712915bae1c0b6db63d8a46014"} Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.499740 4834 generic.go:334] "Generic (PLEG): container finished" podID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerID="ca5c3f15cee86c56029b8a65de022dfec1c6d903bf126fa9ad348a526be4fbbe" exitCode=137 Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.500248 4834 generic.go:334] "Generic (PLEG): container finished" podID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerID="f334e13796776a405d7b41c1c64bc7237ef3672440bbb3a1de1da93d69a8713b" exitCode=137 Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.499981 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerDied","Data":"ca5c3f15cee86c56029b8a65de022dfec1c6d903bf126fa9ad348a526be4fbbe"} Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.500326 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerDied","Data":"f334e13796776a405d7b41c1c64bc7237ef3672440bbb3a1de1da93d69a8713b"} Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 
16:12:46.500367 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5865df457c-plm9x" event={"ID":"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a","Type":"ContainerDied","Data":"118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35"} Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.500383 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118786a4c90a656ad549eeb196b769f743a0ef686849b70c26af5464d4e9aa35" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.515376 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.640181 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts\") pod \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.640787 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data\") pod \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.640879 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key\") pod \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.640902 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggkhh\" (UniqueName: \"kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh\") pod \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.640983 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs\") pod \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\" (UID: \"830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a\") " Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.732185 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.713743 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs" (OuterVolumeSpecName: "logs") pod "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" (UID: "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.804570 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" (UID: "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.806947 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh" (OuterVolumeSpecName: "kube-api-access-ggkhh") pod "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" (UID: "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a"). InnerVolumeSpecName "kube-api-access-ggkhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.824448 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts" (OuterVolumeSpecName: "scripts") pod "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" (UID: "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.827012 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.827039 4834 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.827049 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggkhh\" (UniqueName: \"kubernetes.io/projected/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-kube-api-access-ggkhh\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.827057 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.829461 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data" (OuterVolumeSpecName: "config-data") pod "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" (UID: "830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:12:46 crc kubenswrapper[4834]: I0121 16:12:46.928910 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:47 crc kubenswrapper[4834]: I0121 16:12:47.508209 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5865df457c-plm9x" Jan 21 16:12:47 crc kubenswrapper[4834]: I0121 16:12:47.548064 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 16:12:47 crc kubenswrapper[4834]: I0121 16:12:47.559106 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5865df457c-plm9x"] Jan 21 16:12:48 crc kubenswrapper[4834]: I0121 16:12:48.335505 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" path="/var/lib/kubelet/pods/830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a/volumes" Jan 21 16:12:56 crc kubenswrapper[4834]: I0121 16:12:56.727698 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.003834 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:12:59 crc kubenswrapper[4834]: E0121 16:12:59.004662 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon-log" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.004678 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon-log" Jan 21 16:12:59 crc kubenswrapper[4834]: E0121 16:12:59.004717 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.004725 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.004945 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon-log" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.004970 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="830bbb17-8ddd-4e9a-9bce-0ed4aa1e626a" containerName="horizon" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.006509 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.012296 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.112705 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbnb\" (UniqueName: \"kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.112776 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.112823 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.214595 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgbnb\" (UniqueName: \"kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.214962 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.215087 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.215474 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.215629 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.238271 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fgbnb\" (UniqueName: \"kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb\") pod \"certified-operators-xpbgf\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.354049 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:12:59 crc kubenswrapper[4834]: I0121 16:12:59.824977 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:13:00 crc kubenswrapper[4834]: I0121 16:13:00.640869 4834 generic.go:334] "Generic (PLEG): container finished" podID="93193206-8da2-4db8-a06e-ac551fafba93" containerID="fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805" exitCode=0 Jan 21 16:13:00 crc kubenswrapper[4834]: I0121 16:13:00.640975 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerDied","Data":"fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805"} Jan 21 16:13:00 crc kubenswrapper[4834]: I0121 16:13:00.641231 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerStarted","Data":"e8942c581ed2ca919718b07cda1feec20a3fd51729d8b79cdb7d73459cf624f2"} Jan 21 16:13:00 crc kubenswrapper[4834]: I0121 16:13:00.643290 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:13:02 crc kubenswrapper[4834]: I0121 16:13:02.669726 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerStarted","Data":"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7"} Jan 21 16:13:03 crc kubenswrapper[4834]: I0121 16:13:03.682864 4834 generic.go:334] "Generic (PLEG): container finished" podID="93193206-8da2-4db8-a06e-ac551fafba93" containerID="acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7" exitCode=0 Jan 21 16:13:03 crc kubenswrapper[4834]: I0121 16:13:03.682939 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerDied","Data":"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7"} Jan 21 16:13:04 crc kubenswrapper[4834]: I0121 16:13:04.694266 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerStarted","Data":"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87"} Jan 21 16:13:04 crc kubenswrapper[4834]: I0121 16:13:04.711985 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpbgf" podStartSLOduration=3.29400368 podStartE2EDuration="6.711965392s" podCreationTimestamp="2026-01-21 16:12:58 +0000 UTC" firstStartedPulling="2026-01-21 16:13:00.642942833 +0000 UTC m=+6126.617291878" lastFinishedPulling="2026-01-21 16:13:04.060904545 +0000 UTC m=+6130.035253590" observedRunningTime="2026-01-21 16:13:04.710449095 +0000 UTC m=+6130.684798140" watchObservedRunningTime="2026-01-21 
16:13:04.711965392 +0000 UTC m=+6130.686314437" Jan 21 16:13:06 crc kubenswrapper[4834]: I0121 16:13:06.728162 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-756657db5-vkplz" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.107:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8080: connect: connection refused" Jan 21 16:13:06 crc kubenswrapper[4834]: I0121 16:13:06.728636 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.354750 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.354942 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.403088 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.536024 4834 scope.go:117] "RemoveContainer" containerID="9cea04e4444aa554639eed21083f5131f8a01c55c5bdcfc705163dbab58b1206" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.590205 4834 scope.go:117] "RemoveContainer" containerID="32cae23987f49c8418736bcacd9f057d58490b456247e344b4ac373eb60f7048" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.619229 4834 scope.go:117] "RemoveContainer" containerID="8f6cc37133e8e2cc90631789e5433ee50fef3ae3ccd9cf95a82f788208dda0bc" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.668270 4834 scope.go:117] "RemoveContainer" containerID="d678245dce90e4be55b7e378d1e432cf606ea3fd6b7a6e04c6064d731d2eaf29" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.714743 4834 scope.go:117] "RemoveContainer" containerID="d21af324c2a5f5bd84ae0a8c47859ee9aa0d9404b8f1d57acf8af59ce5e32608" Jan 21 16:13:09 crc kubenswrapper[4834]: I0121 16:13:09.786496 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.766715 4834 generic.go:334] "Generic (PLEG): container finished" podID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerID="f717a62a5a73125c73be144d418ee5bdc91381ab1b16df06a342385bd203a91f" exitCode=137 Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.768016 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerDied","Data":"f717a62a5a73125c73be144d418ee5bdc91381ab1b16df06a342385bd203a91f"} Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.768041 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-756657db5-vkplz" event={"ID":"756c8c5a-5683-472c-9717-d3b2ddd3efc3","Type":"ContainerDied","Data":"cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3"} Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.768051 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7b8a3b789c2bebe0a579d6c386d72a0d7773faee4b8f42588c17aa3d49cfb3" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.818000 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-756657db5-vkplz"
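
The "Observed pod startup duration" record just above carries enough timestamps to rederive both figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is consistent with the printed values. A worked check in Go, with the timestamps copied from the record (the layout string is Go's default time.Time formatting, which these fields use):

package main

import (
	"fmt"
	"time"
)

// Go's default time.Time formatting, as printed in the tracker's fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func must(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the certified-operators-xpbgf record above.
	created := must("2026-01-21 16:12:58 +0000 UTC")
	firstPull := must("2026-01-21 16:13:00.642942833 +0000 UTC")
	lastPull := must("2026-01-21 16:13:04.060904545 +0000 UTC")
	running := must("2026-01-21 16:13:04.711965392 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // discount the image-pull window
	fmt.Println(e2e, slo)                // prints: 6.711965392s 3.29400368s
}

Both printed durations match the record exactly.
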
Need to start a new one" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.972864 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vxtc\" (UniqueName: \"kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc\") pod \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.973389 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key\") pod \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.973468 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts\") pod \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.973645 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data\") pod \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.973682 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs\") pod \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\" (UID: \"756c8c5a-5683-472c-9717-d3b2ddd3efc3\") " Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.974481 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs" (OuterVolumeSpecName: "logs") pod "756c8c5a-5683-472c-9717-d3b2ddd3efc3" (UID: "756c8c5a-5683-472c-9717-d3b2ddd3efc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.978814 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "756c8c5a-5683-472c-9717-d3b2ddd3efc3" (UID: "756c8c5a-5683-472c-9717-d3b2ddd3efc3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.979055 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc" (OuterVolumeSpecName: "kube-api-access-7vxtc") pod "756c8c5a-5683-472c-9717-d3b2ddd3efc3" (UID: "756c8c5a-5683-472c-9717-d3b2ddd3efc3"). InnerVolumeSpecName "kube-api-access-7vxtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.998273 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data" (OuterVolumeSpecName: "config-data") pod "756c8c5a-5683-472c-9717-d3b2ddd3efc3" (UID: "756c8c5a-5683-472c-9717-d3b2ddd3efc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:11 crc kubenswrapper[4834]: I0121 16:13:11.999035 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts" (OuterVolumeSpecName: "scripts") pod "756c8c5a-5683-472c-9717-d3b2ddd3efc3" (UID: "756c8c5a-5683-472c-9717-d3b2ddd3efc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.075732 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vxtc\" (UniqueName: \"kubernetes.io/projected/756c8c5a-5683-472c-9717-d3b2ddd3efc3-kube-api-access-7vxtc\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.075765 4834 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/756c8c5a-5683-472c-9717-d3b2ddd3efc3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.075775 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.075783 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/756c8c5a-5683-472c-9717-d3b2ddd3efc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.075792 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/756c8c5a-5683-472c-9717-d3b2ddd3efc3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.779796 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-756657db5-vkplz" Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.833717 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.843463 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-756657db5-vkplz"] Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.853097 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:13:12 crc kubenswrapper[4834]: I0121 16:13:12.853377 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xpbgf" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="registry-server" containerID="cri-o://c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87" gracePeriod=2 Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.392300 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.501703 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgbnb\" (UniqueName: \"kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb\") pod \"93193206-8da2-4db8-a06e-ac551fafba93\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.501808 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content\") pod \"93193206-8da2-4db8-a06e-ac551fafba93\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.501846 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities\") pod \"93193206-8da2-4db8-a06e-ac551fafba93\" (UID: \"93193206-8da2-4db8-a06e-ac551fafba93\") " Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.503104 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities" (OuterVolumeSpecName: "utilities") pod "93193206-8da2-4db8-a06e-ac551fafba93" (UID: "93193206-8da2-4db8-a06e-ac551fafba93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.506727 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb" (OuterVolumeSpecName: "kube-api-access-fgbnb") pod "93193206-8da2-4db8-a06e-ac551fafba93" (UID: "93193206-8da2-4db8-a06e-ac551fafba93"). InnerVolumeSpecName "kube-api-access-fgbnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.545867 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93193206-8da2-4db8-a06e-ac551fafba93" (UID: "93193206-8da2-4db8-a06e-ac551fafba93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.604822 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgbnb\" (UniqueName: \"kubernetes.io/projected/93193206-8da2-4db8-a06e-ac551fafba93-kube-api-access-fgbnb\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.604859 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.604871 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93193206-8da2-4db8-a06e-ac551fafba93-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.793464 4834 generic.go:334] "Generic (PLEG): container finished" podID="93193206-8da2-4db8-a06e-ac551fafba93" containerID="c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87" exitCode=0 Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.793529 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpbgf" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.793531 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerDied","Data":"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87"} Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.793998 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpbgf" event={"ID":"93193206-8da2-4db8-a06e-ac551fafba93","Type":"ContainerDied","Data":"e8942c581ed2ca919718b07cda1feec20a3fd51729d8b79cdb7d73459cf624f2"} Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.794051 4834 scope.go:117] "RemoveContainer" containerID="c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.815571 4834 scope.go:117] "RemoveContainer" containerID="acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.843163 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.858174 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xpbgf"] Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.869018 4834 scope.go:117] "RemoveContainer" containerID="fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.893431 4834 scope.go:117] "RemoveContainer" containerID="c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87" Jan 21 16:13:13 crc kubenswrapper[4834]: E0121 16:13:13.893944 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87\": container with ID starting with c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87 not found: ID does not exist" containerID="c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.893999 
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87"} err="failed to get container status \"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87\": rpc error: code = NotFound desc = could not find container \"c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87\": container with ID starting with c363c04a119458c06533a55ebe3482eee78abb8cddc17e1c77b7d53f05c81c87 not found: ID does not exist" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.894031 4834 scope.go:117] "RemoveContainer" containerID="acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7" Jan 21 16:13:13 crc kubenswrapper[4834]: E0121 16:13:13.894417 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7\": container with ID starting with acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7 not found: ID does not exist" containerID="acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.894449 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7"} err="failed to get container status \"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7\": rpc error: code = NotFound desc = could not find container \"acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7\": container with ID starting with acd0292172975658831994a818b9be98e6b2a056e4b99bf01a02396aafe5ddf7 not found: ID does not exist" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.894469 4834 scope.go:117] "RemoveContainer" containerID="fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805" Jan 21 16:13:13 crc kubenswrapper[4834]: E0121 16:13:13.894838 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805\": container with ID starting with fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805 not found: ID does not exist" containerID="fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805" Jan 21 16:13:13 crc kubenswrapper[4834]: I0121 16:13:13.894910 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805"} err="failed to get container status \"fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805\": rpc error: code = NotFound desc = could not find container \"fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805\": container with ID starting with fe8476d84efab75ed82ac202964b791f28fac3f79cd53f2c4d17c08d8c9f0805 not found: ID does not exist" Jan 21 16:13:14 crc kubenswrapper[4834]: I0121 16:13:14.337803 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" path="/var/lib/kubelet/pods/756c8c5a-5683-472c-9717-d3b2ddd3efc3/volumes" Jan 21 16:13:14 crc kubenswrapper[4834]: I0121 16:13:14.339082 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93193206-8da2-4db8-a06e-ac551fafba93" path="/var/lib/kubelet/pods/93193206-8da2-4db8-a06e-ac551fafba93/volumes"
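
The RemoveContainer churn above is benign: those containers were already purged, so each ContainerStatus/DeleteContainer call comes back with gRPC NotFound and the kubelet just logs it and moves on. The usual way to keep such cleanup idempotent is to treat NotFound as success. A minimal sketch of that pattern; removeContainer below is a hypothetical stand-in, not the real CRI client method:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in for a CRI RemoveContainer
// call; like the runtime above, it reports NotFound for a purged ID.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// cleanup treats NotFound as "already deleted", so a retried pass over
// the same IDs (as in the records above) is harmless rather than fatal.
func cleanup(ids []string) error {
	for _, id := range ids {
		if err := removeContainer(id); err != nil {
			if status.Code(err) == codes.NotFound {
				continue // already gone; nothing left to do
			}
			return fmt.Errorf("remove %s: %w", id, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(cleanup([]string{"c363c04a1194", "acd029217297"})) // <nil>
}
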
Jan 21 16:13:17 crc kubenswrapper[4834]: I0121 16:13:17.114673 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:17 crc kubenswrapper[4834]: I0121 16:13:17.115391 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:19 crc kubenswrapper[4834]: I0121 16:13:19.039248 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ed6b-account-create-update-vbnjd"] Jan 21 16:13:19 crc kubenswrapper[4834]: I0121 16:13:19.049017 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-695hd"] Jan 21 16:13:19 crc kubenswrapper[4834]: I0121 16:13:19.057650 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-695hd"] Jan 21 16:13:19 crc kubenswrapper[4834]: I0121 16:13:19.068522 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ed6b-account-create-update-vbnjd"] Jan 21 16:13:20 crc kubenswrapper[4834]: I0121 16:13:20.337509 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e4edd20-e9b0-4004-a8d4-fd17cf05c32c" path="/var/lib/kubelet/pods/7e4edd20-e9b0-4004-a8d4-fd17cf05c32c/volumes" Jan 21 16:13:20 crc kubenswrapper[4834]: I0121 16:13:20.339118 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70e61df-6106-48d9-a3ee-25636d71694e" path="/var/lib/kubelet/pods/b70e61df-6106-48d9-a3ee-25636d71694e/volumes" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.008506 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6786d6bdf9-hpfpt"] Jan 21 16:13:24 crc kubenswrapper[4834]: E0121 16:13:24.009476 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009489 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" Jan 21 16:13:24 crc kubenswrapper[4834]: E0121 16:13:24.009500 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon-log" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009507 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon-log" Jan 21 16:13:24 crc kubenswrapper[4834]: E0121 16:13:24.009524 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="extract-utilities" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009530 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="extract-utilities" Jan 21 16:13:24 crc kubenswrapper[4834]: E0121 16:13:24.009545 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="registry-server" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009550 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="registry-server"
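
The machine-config-daemon liveness failure at the top of this stretch is an ordinary HTTP GET that could not connect. An HTTP probe of this shape reduces to a bounded GET where any transport error or non-2xx response counts as a failure. A minimal sketch, with the endpoint taken from the log; the one-second timeout is an assumption, not the pod's configured value:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mirrors the shape of an HTTP liveness check: a bounded GET
// where any transport error or non-2xx status counts as a failure.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Endpoint from the failed probe; the 1s timeout is an assumption.
	if err := probeOnce("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}
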
podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="registry-server" Jan 21 16:13:24 crc kubenswrapper[4834]: E0121 16:13:24.009573 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="extract-content" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009579 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="extract-content" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009737 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon-log" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009759 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="756c8c5a-5683-472c-9717-d3b2ddd3efc3" containerName="horizon" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.009771 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="93193206-8da2-4db8-a06e-ac551fafba93" containerName="registry-server" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.010775 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.023301 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6786d6bdf9-hpfpt"] Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.095126 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-scripts\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.095430 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvz9r\" (UniqueName: \"kubernetes.io/projected/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-kube-api-access-wvz9r\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.095621 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-logs\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.095807 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-config-data\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.096221 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-horizon-secret-key\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.198488 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-horizon-secret-key\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.198601 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-scripts\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.198627 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvz9r\" (UniqueName: \"kubernetes.io/projected/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-kube-api-access-wvz9r\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.198666 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-logs\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.198712 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-config-data\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.200131 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-logs\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.200572 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-scripts\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.201962 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-config-data\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.208433 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-horizon-secret-key\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.219706 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvz9r\" (UniqueName: \"kubernetes.io/projected/7432cb83-a1c7-4a08-ad0a-f6690e07aee2-kube-api-access-wvz9r\") pod \"horizon-6786d6bdf9-hpfpt\" (UID: \"7432cb83-a1c7-4a08-ad0a-f6690e07aee2\") " 
pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.360278 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:24 crc kubenswrapper[4834]: I0121 16:13:24.809102 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6786d6bdf9-hpfpt"] Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.036835 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6786d6bdf9-hpfpt" event={"ID":"7432cb83-a1c7-4a08-ad0a-f6690e07aee2","Type":"ContainerStarted","Data":"8e3aba4a43642e6c6b23f65f80078a4fba7633ff9010fdcd9cca3895d60516cb"} Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.037310 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6786d6bdf9-hpfpt" event={"ID":"7432cb83-a1c7-4a08-ad0a-f6690e07aee2","Type":"ContainerStarted","Data":"319da247d8e58635b5667da4e1b0d76273e2ffd4f6fb29d72988e1765cab4aca"} Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.582617 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-r9qcd"] Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.584123 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.593201 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-r9qcd"] Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.689020 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-f7d2-account-create-update-7fkw2"] Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.690652 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.697939 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.725391 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f7d2-account-create-update-7fkw2"] Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.737180 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.737234 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdtq\" (UniqueName: \"kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.839179 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.839331 4834 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85xwj\" (UniqueName: \"kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.839372 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.839393 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdtq\" (UniqueName: \"kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.840605 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.857654 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdtq\" (UniqueName: \"kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq\") pod \"heat-db-create-r9qcd\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.941493 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85xwj\" (UniqueName: \"kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.942257 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.943061 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.944649 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:25 crc kubenswrapper[4834]: I0121 16:13:25.970481 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85xwj\" (UniqueName: \"kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj\") pod \"heat-f7d2-account-create-update-7fkw2\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:26 crc kubenswrapper[4834]: I0121 16:13:26.019407 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:26 crc kubenswrapper[4834]: I0121 16:13:26.058612 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6786d6bdf9-hpfpt" event={"ID":"7432cb83-a1c7-4a08-ad0a-f6690e07aee2","Type":"ContainerStarted","Data":"569b3015d62158c0f4b942112911858b54683cd8a451b14a46831f92e9bae68e"} Jan 21 16:13:26 crc kubenswrapper[4834]: I0121 16:13:26.439611 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6786d6bdf9-hpfpt" podStartSLOduration=3.439592139 podStartE2EDuration="3.439592139s" podCreationTimestamp="2026-01-21 16:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:26.079633737 +0000 UTC m=+6152.053982792" watchObservedRunningTime="2026-01-21 16:13:26.439592139 +0000 UTC m=+6152.413941184" Jan 21 16:13:26 crc kubenswrapper[4834]: I0121 16:13:26.444547 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-r9qcd"] Jan 21 16:13:26 crc kubenswrapper[4834]: W0121 16:13:26.451215 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62501e92_7085_47b3_82b5_8df0e62730cb.slice/crio-aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339 WatchSource:0}: Error finding container aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339: Status 404 returned error can't find the container with id aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339 Jan 21 16:13:26 crc kubenswrapper[4834]: I0121 16:13:26.557473 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-f7d2-account-create-update-7fkw2"] Jan 21 16:13:26 crc kubenswrapper[4834]: W0121 16:13:26.557752 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f3c58b_1b3f_43bc_9fc4_9f9f7f58bee3.slice/crio-879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547 WatchSource:0}: Error finding container 879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547: Status 404 returned error can't find the container with id 879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547 Jan 21 16:13:27 crc kubenswrapper[4834]: I0121 16:13:27.069346 4834 generic.go:334] "Generic (PLEG): container finished" podID="62501e92-7085-47b3-82b5-8df0e62730cb" containerID="0457c02e913e95705d27c31d13a4e38fcd78ca6cf7fc8bc8da0752143b6de5f3" exitCode=0 Jan 21 16:13:27 crc kubenswrapper[4834]: I0121 16:13:27.069574 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r9qcd" event={"ID":"62501e92-7085-47b3-82b5-8df0e62730cb","Type":"ContainerDied","Data":"0457c02e913e95705d27c31d13a4e38fcd78ca6cf7fc8bc8da0752143b6de5f3"} Jan 21 16:13:27 crc 
kubenswrapper[4834]: I0121 16:13:27.069668 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r9qcd" event={"ID":"62501e92-7085-47b3-82b5-8df0e62730cb","Type":"ContainerStarted","Data":"aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339"} Jan 21 16:13:27 crc kubenswrapper[4834]: I0121 16:13:27.071334 4834 generic.go:334] "Generic (PLEG): container finished" podID="e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" containerID="84c3df88a7734c2e51a4eb9f6943111a9c22acc25833c352c000d578c29047f7" exitCode=0 Jan 21 16:13:27 crc kubenswrapper[4834]: I0121 16:13:27.071508 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f7d2-account-create-update-7fkw2" event={"ID":"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3","Type":"ContainerDied","Data":"84c3df88a7734c2e51a4eb9f6943111a9c22acc25833c352c000d578c29047f7"} Jan 21 16:13:27 crc kubenswrapper[4834]: I0121 16:13:27.071629 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f7d2-account-create-update-7fkw2" event={"ID":"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3","Type":"ContainerStarted","Data":"879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547"} Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.575343 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.582386 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.705480 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts\") pod \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.705542 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdtq\" (UniqueName: \"kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq\") pod \"62501e92-7085-47b3-82b5-8df0e62730cb\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.706259 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" (UID: "e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.706584 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts\") pod \"62501e92-7085-47b3-82b5-8df0e62730cb\" (UID: \"62501e92-7085-47b3-82b5-8df0e62730cb\") " Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.706794 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85xwj\" (UniqueName: \"kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj\") pod \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\" (UID: \"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3\") " Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.707001 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62501e92-7085-47b3-82b5-8df0e62730cb" (UID: "62501e92-7085-47b3-82b5-8df0e62730cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.707564 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.707759 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62501e92-7085-47b3-82b5-8df0e62730cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.711033 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq" (OuterVolumeSpecName: "kube-api-access-5vdtq") pod "62501e92-7085-47b3-82b5-8df0e62730cb" (UID: "62501e92-7085-47b3-82b5-8df0e62730cb"). InnerVolumeSpecName "kube-api-access-5vdtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.711116 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj" (OuterVolumeSpecName: "kube-api-access-85xwj") pod "e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" (UID: "e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3"). InnerVolumeSpecName "kube-api-access-85xwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.809980 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85xwj\" (UniqueName: \"kubernetes.io/projected/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3-kube-api-access-85xwj\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:28 crc kubenswrapper[4834]: I0121 16:13:28.810479 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vdtq\" (UniqueName: \"kubernetes.io/projected/62501e92-7085-47b3-82b5-8df0e62730cb-kube-api-access-5vdtq\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.091807 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-r9qcd" event={"ID":"62501e92-7085-47b3-82b5-8df0e62730cb","Type":"ContainerDied","Data":"aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339"} Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.091844 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aedfdc26f633f6b04a7a8c3572054d1cb443bfbb6cb71c634775dee7d0a03339" Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.091894 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-r9qcd" Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.100117 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-f7d2-account-create-update-7fkw2" event={"ID":"e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3","Type":"ContainerDied","Data":"879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547"} Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.100162 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="879cc1423addd1134026a006e7be58bbcf2cbe0b0d3fd8a3b15c2460075e4547" Jan 21 16:13:29 crc kubenswrapper[4834]: I0121 16:13:29.100578 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-f7d2-account-create-update-7fkw2" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.888596 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gq75p"] Jan 21 16:13:30 crc kubenswrapper[4834]: E0121 16:13:30.890012 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62501e92-7085-47b3-82b5-8df0e62730cb" containerName="mariadb-database-create" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.890033 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="62501e92-7085-47b3-82b5-8df0e62730cb" containerName="mariadb-database-create" Jan 21 16:13:30 crc kubenswrapper[4834]: E0121 16:13:30.890071 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" containerName="mariadb-account-create-update" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.890079 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" containerName="mariadb-account-create-update" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.890311 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" containerName="mariadb-account-create-update" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.890333 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="62501e92-7085-47b3-82b5-8df0e62730cb" containerName="mariadb-database-create" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.891297 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.894447 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pz4sq" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.897635 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:13:30 crc kubenswrapper[4834]: I0121 16:13:30.908127 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gq75p"] Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.036443 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-slrrx"] Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.050407 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-slrrx"] Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.055517 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jx8\" (UniqueName: \"kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.055588 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.055831 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data\") pod 
\"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.157106 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.157270 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.157325 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jx8\" (UniqueName: \"kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.165687 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.168349 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.177741 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jx8\" (UniqueName: \"kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8\") pod \"heat-db-sync-gq75p\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.210898 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:31 crc kubenswrapper[4834]: I0121 16:13:31.766608 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gq75p"] Jan 21 16:13:31 crc kubenswrapper[4834]: W0121 16:13:31.789992 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f11ed7_1733_4047_83a5_51863b053069.slice/crio-8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2 WatchSource:0}: Error finding container 8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2: Status 404 returned error can't find the container with id 8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2 Jan 21 16:13:32 crc kubenswrapper[4834]: I0121 16:13:32.127125 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gq75p" event={"ID":"c4f11ed7-1733-4047-83a5-51863b053069","Type":"ContainerStarted","Data":"8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2"} Jan 21 16:13:32 crc kubenswrapper[4834]: I0121 16:13:32.335150 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924f5e33-da1f-4f40-a335-a634fe4fc218" path="/var/lib/kubelet/pods/924f5e33-da1f-4f40-a335-a634fe4fc218/volumes" Jan 21 16:13:34 crc kubenswrapper[4834]: I0121 16:13:34.360653 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:34 crc kubenswrapper[4834]: I0121 16:13:34.361039 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:39 crc kubenswrapper[4834]: I0121 16:13:39.213197 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gq75p" event={"ID":"c4f11ed7-1733-4047-83a5-51863b053069","Type":"ContainerStarted","Data":"0ebd3426a8beeb259b0e96c2f25afd824d3cb13191a2a8e0e3934ea43414a6f5"} Jan 21 16:13:39 crc kubenswrapper[4834]: I0121 16:13:39.243041 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gq75p" podStartSLOduration=3.000149831 podStartE2EDuration="9.243013057s" podCreationTimestamp="2026-01-21 16:13:30 +0000 UTC" firstStartedPulling="2026-01-21 16:13:31.796427957 +0000 UTC m=+6157.770777002" lastFinishedPulling="2026-01-21 16:13:38.039291143 +0000 UTC m=+6164.013640228" observedRunningTime="2026-01-21 16:13:39.230181206 +0000 UTC m=+6165.204530251" watchObservedRunningTime="2026-01-21 16:13:39.243013057 +0000 UTC m=+6165.217362102" Jan 21 16:13:40 crc kubenswrapper[4834]: I0121 16:13:40.222819 4834 generic.go:334] "Generic (PLEG): container finished" podID="c4f11ed7-1733-4047-83a5-51863b053069" containerID="0ebd3426a8beeb259b0e96c2f25afd824d3cb13191a2a8e0e3934ea43414a6f5" exitCode=0 Jan 21 16:13:40 crc kubenswrapper[4834]: I0121 16:13:40.223054 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gq75p" event={"ID":"c4f11ed7-1733-4047-83a5-51863b053069","Type":"ContainerDied","Data":"0ebd3426a8beeb259b0e96c2f25afd824d3cb13191a2a8e0e3934ea43414a6f5"} Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.672724 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.797999 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jx8\" (UniqueName: \"kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8\") pod \"c4f11ed7-1733-4047-83a5-51863b053069\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.798238 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle\") pod \"c4f11ed7-1733-4047-83a5-51863b053069\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.798278 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data\") pod \"c4f11ed7-1733-4047-83a5-51863b053069\" (UID: \"c4f11ed7-1733-4047-83a5-51863b053069\") " Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.803773 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8" (OuterVolumeSpecName: "kube-api-access-t9jx8") pod "c4f11ed7-1733-4047-83a5-51863b053069" (UID: "c4f11ed7-1733-4047-83a5-51863b053069"). InnerVolumeSpecName "kube-api-access-t9jx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.828284 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f11ed7-1733-4047-83a5-51863b053069" (UID: "c4f11ed7-1733-4047-83a5-51863b053069"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.879420 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data" (OuterVolumeSpecName: "config-data") pod "c4f11ed7-1733-4047-83a5-51863b053069" (UID: "c4f11ed7-1733-4047-83a5-51863b053069"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.900221 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.900445 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f11ed7-1733-4047-83a5-51863b053069-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:41 crc kubenswrapper[4834]: I0121 16:13:41.900549 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jx8\" (UniqueName: \"kubernetes.io/projected/c4f11ed7-1733-4047-83a5-51863b053069-kube-api-access-t9jx8\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:42 crc kubenswrapper[4834]: I0121 16:13:42.309063 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gq75p" event={"ID":"c4f11ed7-1733-4047-83a5-51863b053069","Type":"ContainerDied","Data":"8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2"} Jan 21 16:13:42 crc kubenswrapper[4834]: I0121 16:13:42.309115 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f124cfca2a78a93a3cf04179e544219aa0d01df4109e6852c9976be07553cc2" Jan 21 16:13:42 crc kubenswrapper[4834]: I0121 16:13:42.309181 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gq75p" Jan 21 16:13:42 crc kubenswrapper[4834]: E0121 16:13:42.470828 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f11ed7_1733_4047_83a5_51863b053069.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.468982 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8576c9d7cb-7mkvb"] Jan 21 16:13:43 crc kubenswrapper[4834]: E0121 16:13:43.469959 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f11ed7-1733-4047-83a5-51863b053069" containerName="heat-db-sync" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.469973 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f11ed7-1733-4047-83a5-51863b053069" containerName="heat-db-sync" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.470196 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f11ed7-1733-4047-83a5-51863b053069" containerName="heat-db-sync" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.470827 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.474653 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.474889 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pz4sq" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.475081 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.509739 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8576c9d7cb-7mkvb"] Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.533586 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqt6\" (UniqueName: \"kubernetes.io/projected/2e806442-8e14-4797-8b36-1dc85c99ace6-kube-api-access-7wqt6\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.533685 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.533917 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data-custom\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.534151 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-combined-ca-bundle\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.544820 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d548bb56b-54t7m"] Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.546214 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.549216 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.556620 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d548bb56b-54t7m"] Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.635972 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wqt6\" (UniqueName: \"kubernetes.io/projected/2e806442-8e14-4797-8b36-1dc85c99ace6-kube-api-access-7wqt6\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636076 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636264 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data-custom\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636324 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p5lf\" (UniqueName: \"kubernetes.io/projected/b6953411-1bd9-482c-a7c7-37885a856ec3-kube-api-access-2p5lf\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636359 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-combined-ca-bundle\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636418 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data-custom\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636467 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.636502 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-combined-ca-bundle\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " 
pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.654873 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.663907 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-config-data-custom\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.670121 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wqt6\" (UniqueName: \"kubernetes.io/projected/2e806442-8e14-4797-8b36-1dc85c99ace6-kube-api-access-7wqt6\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.670739 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e806442-8e14-4797-8b36-1dc85c99ace6-combined-ca-bundle\") pod \"heat-engine-8576c9d7cb-7mkvb\" (UID: \"2e806442-8e14-4797-8b36-1dc85c99ace6\") " pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.671790 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b7dc4444b-95qgh"] Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.675264 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.679809 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.684280 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b7dc4444b-95qgh"] Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.739152 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data-custom\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.739238 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qpl\" (UniqueName: \"kubernetes.io/projected/a5917c3a-9e06-404b-8efc-a52457ca4625-kube-api-access-z8qpl\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.739755 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data-custom\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.739912 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p5lf\" (UniqueName: \"kubernetes.io/projected/b6953411-1bd9-482c-a7c7-37885a856ec3-kube-api-access-2p5lf\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.739976 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-combined-ca-bundle\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.740159 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.740220 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.740271 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-combined-ca-bundle\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " 
pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.852191 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.853035 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-combined-ca-bundle\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.853212 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data-custom\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.853296 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8qpl\" (UniqueName: \"kubernetes.io/projected/a5917c3a-9e06-404b-8efc-a52457ca4625-kube-api-access-z8qpl\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.861541 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.861603 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-config-data-custom\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.861763 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6953411-1bd9-482c-a7c7-37885a856ec3-combined-ca-bundle\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.862622 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.880573 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data-custom\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.881642 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-config-data\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.885741 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5917c3a-9e06-404b-8efc-a52457ca4625-combined-ca-bundle\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.895522 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8qpl\" (UniqueName: \"kubernetes.io/projected/a5917c3a-9e06-404b-8efc-a52457ca4625-kube-api-access-z8qpl\") pod \"heat-cfnapi-6b7dc4444b-95qgh\" (UID: \"a5917c3a-9e06-404b-8efc-a52457ca4625\") " pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:43 crc kubenswrapper[4834]: I0121 16:13:43.901917 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p5lf\" (UniqueName: \"kubernetes.io/projected/b6953411-1bd9-482c-a7c7-37885a856ec3-kube-api-access-2p5lf\") pod \"heat-api-7d548bb56b-54t7m\" (UID: \"b6953411-1bd9-482c-a7c7-37885a856ec3\") " pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.085573 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.165971 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.405712 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6786d6bdf9-hpfpt" podUID="7432cb83-a1c7-4a08-ad0a-f6690e07aee2" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.113:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.113:8080: connect: connection refused" Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.441208 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8576c9d7cb-7mkvb"] Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.755300 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b7dc4444b-95qgh"] Jan 21 16:13:44 crc kubenswrapper[4834]: I0121 16:13:44.863918 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d548bb56b-54t7m"] Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.464397 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" event={"ID":"a5917c3a-9e06-404b-8efc-a52457ca4625","Type":"ContainerStarted","Data":"867635b7d08f8e3516a8683744e274c7cf700c3ce8f7078c02019c80a30cda7c"} Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.468227 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8576c9d7cb-7mkvb" event={"ID":"2e806442-8e14-4797-8b36-1dc85c99ace6","Type":"ContainerStarted","Data":"9daf48056836e4f1d711991bbe7708dbec2a0830917d495ed38ee7c97f639e43"} Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.468280 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8576c9d7cb-7mkvb" event={"ID":"2e806442-8e14-4797-8b36-1dc85c99ace6","Type":"ContainerStarted","Data":"f7b8b2ce232751a2467d910c49713c844fc33bbf168a1cbf29548061f9b5cffa"} Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.470026 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.478227 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d548bb56b-54t7m" event={"ID":"b6953411-1bd9-482c-a7c7-37885a856ec3","Type":"ContainerStarted","Data":"3a0c4f92ea55101d6f2396aaca7d15ac5b7eff72253d031f6807e67a10445982"} Jan 21 16:13:45 crc kubenswrapper[4834]: I0121 16:13:45.498043 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8576c9d7cb-7mkvb" podStartSLOduration=2.498019052 podStartE2EDuration="2.498019052s" podCreationTimestamp="2026-01-21 16:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:13:45.489492786 +0000 UTC m=+6171.463841831" watchObservedRunningTime="2026-01-21 16:13:45.498019052 +0000 UTC m=+6171.472368097" Jan 21 16:13:47 crc kubenswrapper[4834]: I0121 16:13:47.113874 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:13:47 crc kubenswrapper[4834]: I0121 16:13:47.114888 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.534634 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d548bb56b-54t7m" event={"ID":"b6953411-1bd9-482c-a7c7-37885a856ec3","Type":"ContainerStarted","Data":"d293b301219cbf3ecf40e9ad6140fe1cc0cab7315d68d7c7ee4c8accf35f8ad8"} Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.536145 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.538713 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" event={"ID":"a5917c3a-9e06-404b-8efc-a52457ca4625","Type":"ContainerStarted","Data":"d68ac9d80de85166c4ed612ce4c4c798758181862fde1bea0a4846d369081b1c"} Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.538868 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.560150 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d548bb56b-54t7m" podStartSLOduration=3.03276016 podStartE2EDuration="5.56012194s" podCreationTimestamp="2026-01-21 16:13:43 +0000 UTC" firstStartedPulling="2026-01-21 16:13:44.883308509 +0000 UTC m=+6170.857657554" lastFinishedPulling="2026-01-21 16:13:47.410670289 +0000 UTC m=+6173.385019334" observedRunningTime="2026-01-21 16:13:48.552494261 +0000 UTC m=+6174.526843306" watchObservedRunningTime="2026-01-21 16:13:48.56012194 +0000 UTC m=+6174.534470985" Jan 21 16:13:48 crc kubenswrapper[4834]: I0121 16:13:48.582079 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" podStartSLOduration=2.9759787380000002 podStartE2EDuration="5.582055874s" podCreationTimestamp="2026-01-21 16:13:43 +0000 UTC" firstStartedPulling="2026-01-21 16:13:44.801034222 +0000 UTC m=+6170.775383267" lastFinishedPulling="2026-01-21 16:13:47.407111358 +0000 UTC m=+6173.381460403" observedRunningTime="2026-01-21 16:13:48.571094292 +0000 UTC m=+6174.545443337" watchObservedRunningTime="2026-01-21 16:13:48.582055874 +0000 UTC m=+6174.556404919" Jan 21 16:13:55 crc kubenswrapper[4834]: I0121 16:13:55.895873 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b7dc4444b-95qgh" Jan 21 16:13:56 crc kubenswrapper[4834]: I0121 16:13:56.593543 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7d548bb56b-54t7m" Jan 21 16:13:57 crc kubenswrapper[4834]: I0121 16:13:57.271286 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:59 crc kubenswrapper[4834]: I0121 16:13:59.409404 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6786d6bdf9-hpfpt" Jan 21 16:13:59 crc kubenswrapper[4834]: I0121 16:13:59.485411 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"] Jan 21 16:13:59 crc kubenswrapper[4834]: I0121 16:13:59.485668 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon-log" 
containerID="cri-o://ae7bff0d0fa14cf3cca0744de0e2382ae7fa07f5df583e540b4a0e00c0015abf" gracePeriod=30 Jan 21 16:13:59 crc kubenswrapper[4834]: I0121 16:13:59.485805 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon" containerID="cri-o://a577e9b48de6ebaf0bbb379aa8e3624a31b81edf08ba7fd55d7e4a9be80bd44d" gracePeriod=30 Jan 21 16:14:03 crc kubenswrapper[4834]: I0121 16:14:03.705923 4834 generic.go:334] "Generic (PLEG): container finished" podID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerID="a577e9b48de6ebaf0bbb379aa8e3624a31b81edf08ba7fd55d7e4a9be80bd44d" exitCode=0 Jan 21 16:14:03 crc kubenswrapper[4834]: I0121 16:14:03.706019 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerDied","Data":"a577e9b48de6ebaf0bbb379aa8e3624a31b81edf08ba7fd55d7e4a9be80bd44d"} Jan 21 16:14:03 crc kubenswrapper[4834]: I0121 16:14:03.898400 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8576c9d7cb-7mkvb" Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.048415 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-jtkvh"] Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.061024 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e44c-account-create-update-rg8tm"] Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.069641 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-jtkvh"] Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.077772 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e44c-account-create-update-rg8tm"] Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.347342 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d12dc70-c2d4-42e8-b021-a500e3f3dabe" path="/var/lib/kubelet/pods/1d12dc70-c2d4-42e8-b021-a500e3f3dabe/volumes" Jan 21 16:14:04 crc kubenswrapper[4834]: I0121 16:14:04.348785 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e078da-a5e4-49c9-8724-560b2354f424" path="/var/lib/kubelet/pods/c0e078da-a5e4-49c9-8724-560b2354f424/volumes" Jan 21 16:14:07 crc kubenswrapper[4834]: I0121 16:14:07.472306 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 21 16:14:09 crc kubenswrapper[4834]: I0121 16:14:09.852314 4834 scope.go:117] "RemoveContainer" containerID="855a35d92c641738c1bab78dd7f32c70f4bea2b258e47ed98d7fcda4c377b87e" Jan 21 16:14:09 crc kubenswrapper[4834]: I0121 16:14:09.903097 4834 scope.go:117] "RemoveContainer" containerID="a71ad9235bf3a613cae2f1e141e1cfffcd4df65560cab09adbd772c85d3b0dda" Jan 21 16:14:09 crc kubenswrapper[4834]: I0121 16:14:09.953761 4834 scope.go:117] "RemoveContainer" containerID="7852aedc75177045582c3a4334fd68855c336b593e7b433557af98e331f02aea" Jan 21 16:14:09 crc kubenswrapper[4834]: I0121 16:14:09.990022 4834 scope.go:117] "RemoveContainer" containerID="82005a3d5c913e95e851ea6893308e98bc455fb856413999fb6ebe7bc694e2cf" Jan 21 16:14:10 crc kubenswrapper[4834]: I0121 16:14:10.050040 4834 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-spb6s"] Jan 21 16:14:10 crc kubenswrapper[4834]: I0121 16:14:10.058553 4834 scope.go:117] "RemoveContainer" containerID="d815e341d3fb85bdd25828b0f1850fdb55b0f2f0d389b347afbaf01c0c338145" Jan 21 16:14:10 crc kubenswrapper[4834]: I0121 16:14:10.062995 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-spb6s"] Jan 21 16:14:10 crc kubenswrapper[4834]: I0121 16:14:10.341035 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38472bc9-79fa-40c4-9319-e4999e158433" path="/var/lib/kubelet/pods/38472bc9-79fa-40c4-9319-e4999e158433/volumes" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.297729 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq"] Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.300716 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.305482 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.308484 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq"] Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.318108 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.321871 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.322272 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggdt\" (UniqueName: \"kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.425120 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggdt\" (UniqueName: \"kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.425384 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.425538 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.426209 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.426273 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.453557 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggdt\" (UniqueName: \"kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:13 crc kubenswrapper[4834]: I0121 16:14:13.624141 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:14 crc kubenswrapper[4834]: I0121 16:14:14.082743 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq"] Jan 21 16:14:14 crc kubenswrapper[4834]: I0121 16:14:14.822261 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerID="ed93a33fda81aad5677e13c8d5f5545e8761ad304df7c0cbb7aeed0a3d9452fa" exitCode=0 Jan 21 16:14:14 crc kubenswrapper[4834]: I0121 16:14:14.822469 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" event={"ID":"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63","Type":"ContainerDied","Data":"ed93a33fda81aad5677e13c8d5f5545e8761ad304df7c0cbb7aeed0a3d9452fa"} Jan 21 16:14:14 crc kubenswrapper[4834]: I0121 16:14:14.822540 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" event={"ID":"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63","Type":"ContainerStarted","Data":"802e7c65c513fd65bcffd79f954de3e66dfeba8beb1b41dfcb5141d04d405b32"} Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.113744 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.114307 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.114353 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.114990 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.115045 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b" gracePeriod=600 Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.473225 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.855288 4834 
generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b" exitCode=0 Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.855390 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b"} Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.855638 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"} Jan 21 16:14:17 crc kubenswrapper[4834]: I0121 16:14:17.855658 4834 scope.go:117] "RemoveContainer" containerID="cb2bd2c4ea791ba35cf80fda48b43f76bf8e98e264503b6b9055964d8a5659fd" Jan 21 16:14:18 crc kubenswrapper[4834]: I0121 16:14:18.868569 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerID="3d9481066b34d4f12fb3691ef92c170fe8b601e6f99098749622f345e3ab5b4f" exitCode=0 Jan 21 16:14:18 crc kubenswrapper[4834]: I0121 16:14:18.868636 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" event={"ID":"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63","Type":"ContainerDied","Data":"3d9481066b34d4f12fb3691ef92c170fe8b601e6f99098749622f345e3ab5b4f"} Jan 21 16:14:19 crc kubenswrapper[4834]: I0121 16:14:19.885694 4834 generic.go:334] "Generic (PLEG): container finished" podID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerID="08fc86d725b10f34ea02e8bff216a119a8e2ff22cc65f26af13db515c29f06ae" exitCode=0 Jan 21 16:14:19 crc kubenswrapper[4834]: I0121 16:14:19.885753 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" event={"ID":"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63","Type":"ContainerDied","Data":"08fc86d725b10f34ea02e8bff216a119a8e2ff22cc65f26af13db515c29f06ae"} Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.243955 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.398520 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util\") pod \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.398708 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggdt\" (UniqueName: \"kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt\") pod \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.398739 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle\") pod \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\" (UID: \"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63\") " Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.401333 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle" (OuterVolumeSpecName: "bundle") pod "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" (UID: "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.406158 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util" (OuterVolumeSpecName: "util") pod "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" (UID: "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.408096 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt" (OuterVolumeSpecName: "kube-api-access-zggdt") pod "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" (UID: "2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63"). InnerVolumeSpecName "kube-api-access-zggdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.502489 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggdt\" (UniqueName: \"kubernetes.io/projected/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-kube-api-access-zggdt\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.502524 4834 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.502538 4834 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63-util\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.908414 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" event={"ID":"2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63","Type":"ContainerDied","Data":"802e7c65c513fd65bcffd79f954de3e66dfeba8beb1b41dfcb5141d04d405b32"} Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.908487 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="802e7c65c513fd65bcffd79f954de3e66dfeba8beb1b41dfcb5141d04d405b32" Jan 21 16:14:21 crc kubenswrapper[4834]: I0121 16:14:21.908541 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq" Jan 21 16:14:27 crc kubenswrapper[4834]: I0121 16:14:27.475000 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ff5d6ccd9-qr9p9" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.109:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.109:8080: connect: connection refused" Jan 21 16:14:27 crc kubenswrapper[4834]: I0121 16:14:27.475778 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ff5d6ccd9-qr9p9" Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.055642 4834 generic.go:334] "Generic (PLEG): container finished" podID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerID="ae7bff0d0fa14cf3cca0744de0e2382ae7fa07f5df583e540b4a0e00c0015abf" exitCode=137 Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.056269 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerDied","Data":"ae7bff0d0fa14cf3cca0744de0e2382ae7fa07f5df583e540b4a0e00c0015abf"} Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.320621 4834 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.511799 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts\") pod \"948fc85f-d13f-4ae9-a878-27b64972bfb1\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") "
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.511877 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data\") pod \"948fc85f-d13f-4ae9-a878-27b64972bfb1\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") "
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.511978 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5w6\" (UniqueName: \"kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6\") pod \"948fc85f-d13f-4ae9-a878-27b64972bfb1\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") "
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.512034 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key\") pod \"948fc85f-d13f-4ae9-a878-27b64972bfb1\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") "
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.512080 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs\") pod \"948fc85f-d13f-4ae9-a878-27b64972bfb1\" (UID: \"948fc85f-d13f-4ae9-a878-27b64972bfb1\") "
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.512666 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs" (OuterVolumeSpecName: "logs") pod "948fc85f-d13f-4ae9-a878-27b64972bfb1" (UID: "948fc85f-d13f-4ae9-a878-27b64972bfb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.513348 4834 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948fc85f-d13f-4ae9-a878-27b64972bfb1-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.527213 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6" (OuterVolumeSpecName: "kube-api-access-hj5w6") pod "948fc85f-d13f-4ae9-a878-27b64972bfb1" (UID: "948fc85f-d13f-4ae9-a878-27b64972bfb1"). InnerVolumeSpecName "kube-api-access-hj5w6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.527345 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "948fc85f-d13f-4ae9-a878-27b64972bfb1" (UID: "948fc85f-d13f-4ae9-a878-27b64972bfb1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.545625 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data" (OuterVolumeSpecName: "config-data") pod "948fc85f-d13f-4ae9-a878-27b64972bfb1" (UID: "948fc85f-d13f-4ae9-a878-27b64972bfb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.568472 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts" (OuterVolumeSpecName: "scripts") pod "948fc85f-d13f-4ae9-a878-27b64972bfb1" (UID: "948fc85f-d13f-4ae9-a878-27b64972bfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.616345 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj5w6\" (UniqueName: \"kubernetes.io/projected/948fc85f-d13f-4ae9-a878-27b64972bfb1-kube-api-access-hj5w6\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.616385 4834 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/948fc85f-d13f-4ae9-a878-27b64972bfb1-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.616397 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:30 crc kubenswrapper[4834]: I0121 16:14:30.616409 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/948fc85f-d13f-4ae9-a878-27b64972bfb1-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.069885 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff5d6ccd9-qr9p9" event={"ID":"948fc85f-d13f-4ae9-a878-27b64972bfb1","Type":"ContainerDied","Data":"38013e77835596c119f81e2ab12b7cf8cb511efa4582fa34a0450d908dc60b53"}
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.070266 4834 scope.go:117] "RemoveContainer" containerID="a577e9b48de6ebaf0bbb379aa8e3624a31b81edf08ba7fd55d7e4a9be80bd44d"
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.069958 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ff5d6ccd9-qr9p9"
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.121556 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"]
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.131544 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ff5d6ccd9-qr9p9"]
Jan 21 16:14:31 crc kubenswrapper[4834]: I0121 16:14:31.278963 4834 scope.go:117] "RemoveContainer" containerID="ae7bff0d0fa14cf3cca0744de0e2382ae7fa07f5df583e540b4a0e00c0015abf"
Jan 21 16:14:32 crc kubenswrapper[4834]: I0121 16:14:32.340105 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" path="/var/lib/kubelet/pods/948fc85f-d13f-4ae9-a878-27b64972bfb1/volumes"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.494427 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"]
Jan 21 16:14:33 crc kubenswrapper[4834]: E0121 16:14:33.494984 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon-log"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495000 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon-log"
Jan 21 16:14:33 crc kubenswrapper[4834]: E0121 16:14:33.495018 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="extract"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495025 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="extract"
Jan 21 16:14:33 crc kubenswrapper[4834]: E0121 16:14:33.495042 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="pull"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495050 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="pull"
Jan 21 16:14:33 crc kubenswrapper[4834]: E0121 16:14:33.495061 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495070 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon"
Jan 21 16:14:33 crc kubenswrapper[4834]: E0121 16:14:33.495090 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="util"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495098 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="util"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495323 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63" containerName="extract"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495344 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon-log"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.495361 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="948fc85f-d13f-4ae9-a878-27b64972bfb1" containerName="horizon"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.496221 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.501016 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-44g4c"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.501291 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.501457 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.527629 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"]
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.587254 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgtm\" (UniqueName: \"kubernetes.io/projected/89f9bbd9-52c4-4da9-940f-1c6b73caf38a-kube-api-access-bqgtm\") pod \"obo-prometheus-operator-68bc856cb9-7psl7\" (UID: \"89f9bbd9-52c4-4da9-940f-1c6b73caf38a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.689143 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgtm\" (UniqueName: \"kubernetes.io/projected/89f9bbd9-52c4-4da9-940f-1c6b73caf38a-kube-api-access-bqgtm\") pod \"obo-prometheus-operator-68bc856cb9-7psl7\" (UID: \"89f9bbd9-52c4-4da9-940f-1c6b73caf38a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.719486 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqgtm\" (UniqueName: \"kubernetes.io/projected/89f9bbd9-52c4-4da9-940f-1c6b73caf38a-kube-api-access-bqgtm\") pod \"obo-prometheus-operator-68bc856cb9-7psl7\" (UID: \"89f9bbd9-52c4-4da9-940f-1c6b73caf38a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.746698 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"]
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.808634 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"]
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.810193 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.828131 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zwr6z"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.828216 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.828824 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.893102 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"]
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.894624 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.922323 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.922417 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.922498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.922604 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:33 crc kubenswrapper[4834]: I0121 16:14:33.979036 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"]
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.031387 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.031431 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.031463 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.031498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.052172 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.066566 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cbdb85b-45e6-4a13-b640-606a0c1d0ebc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-649n8\" (UID: \"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.069671 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.080413 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/62724f29-4a17-4c89-85aa-060935c8c462-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-8674496d68-bmxdh\" (UID: \"62724f29-4a17-4c89-85aa-060935c8c462\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.103143 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lwpmg"]
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.104812 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.107659 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2gncm"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.109117 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.129361 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lwpmg"]
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.187682 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.210831 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-45x98"]
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.214149 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.218344 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-hrnct"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.227605 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-45x98"]
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.241240 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sgmz\" (UniqueName: \"kubernetes.io/projected/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-kube-api-access-4sgmz\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.241304 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.241329 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sztc\" (UniqueName: \"kubernetes.io/projected/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-kube-api-access-8sztc\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.241384 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.266481 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.347273 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sgmz\" (UniqueName: \"kubernetes.io/projected/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-kube-api-access-4sgmz\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.347619 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.347660 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sztc\" (UniqueName: \"kubernetes.io/projected/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-kube-api-access-8sztc\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.347778 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.348914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.382000 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.387454 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sztc\" (UniqueName: \"kubernetes.io/projected/a53ef277-6aa4-4cdc-b098-4cd5ac373b0e-kube-api-access-8sztc\") pod \"perses-operator-5bf474d74f-45x98\" (UID: \"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e\") " pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.395584 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sgmz\" (UniqueName: \"kubernetes.io/projected/30071fcc-7598-4f5d-94a4-af2abfcc9ed3-kube-api-access-4sgmz\") pod \"observability-operator-59bdc8b94-lwpmg\" (UID: \"30071fcc-7598-4f5d-94a4-af2abfcc9ed3\") " pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.509965 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.615746 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-45x98"
Jan 21 16:14:34 crc kubenswrapper[4834]: I0121 16:14:34.835556 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7"]
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.108625 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8"]
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.121412 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh"]
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.164681 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7" event={"ID":"89f9bbd9-52c4-4da9-940f-1c6b73caf38a","Type":"ContainerStarted","Data":"a1a5f82b5fc7428050f773f2922561b54d10ba0ed974c211f6c8f738f79143f5"}
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.173456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh" event={"ID":"62724f29-4a17-4c89-85aa-060935c8c462","Type":"ContainerStarted","Data":"8263022e8c85492f986566ff81abe51c10676fd8c1acd01814628834a7f70986"}
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.176971 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8" event={"ID":"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc","Type":"ContainerStarted","Data":"240806a67efa6234f3ffc2b1e8087d2b9a7ceb25e741decb18dc7e980f587486"}
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.322299 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-45x98"]
Jan 21 16:14:35 crc kubenswrapper[4834]: I0121 16:14:35.357962 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-lwpmg"]
Jan 21 16:14:36 crc kubenswrapper[4834]: I0121 16:14:36.219267 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-45x98" event={"ID":"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e","Type":"ContainerStarted","Data":"dbc49faf1691fba49da823e6e573324a22ad0be24a2cd468dc701d26d1ffc2f7"}
Jan 21 16:14:36 crc kubenswrapper[4834]: I0121 16:14:36.240140 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" event={"ID":"30071fcc-7598-4f5d-94a4-af2abfcc9ed3","Type":"ContainerStarted","Data":"3edad6f3fe0ded72f3af8b2ddc88fe6379951ea4129146c1b4ae58d6e0959c72"}
Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.405877 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh" event={"ID":"62724f29-4a17-4c89-85aa-060935c8c462","Type":"ContainerStarted","Data":"5045a12ecf7f6ef8bc3e7e8d5a6eb1fdd0a427440760e5e7c22644b459341784"}
Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.418058 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" event={"ID":"30071fcc-7598-4f5d-94a4-af2abfcc9ed3","Type":"ContainerStarted","Data":"35cb6d60d49aea0c525af0ccc1edd0cae9d773adb37e2231bb50dac90d88f48c"}
pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" event={"ID":"30071fcc-7598-4f5d-94a4-af2abfcc9ed3","Type":"ContainerStarted","Data":"35cb6d60d49aea0c525af0ccc1edd0cae9d773adb37e2231bb50dac90d88f48c"} Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.418826 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.429816 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8" event={"ID":"4cbdb85b-45e6-4a13-b640-606a0c1d0ebc","Type":"ContainerStarted","Data":"ec3113e3b7d5eb1c554caeebfd02aaad581d0d434493e65234b92a50c3775d4c"} Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.431134 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.435279 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-45x98" event={"ID":"a53ef277-6aa4-4cdc-b098-4cd5ac373b0e","Type":"ContainerStarted","Data":"3a06bb91654f2a85ab8fb15ae7bc9f9556083ebc3a42700a6ba3c902f7d04653"} Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.436260 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-45x98" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.441941 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7" event={"ID":"89f9bbd9-52c4-4da9-940f-1c6b73caf38a","Type":"ContainerStarted","Data":"da75b55fb5d8a0d95ae64d1a622b746014b8c76f918cb2585db07273171d3be9"} Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.530498 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-649n8" podStartSLOduration=3.111328802 podStartE2EDuration="11.530469943s" podCreationTimestamp="2026-01-21 16:14:33 +0000 UTC" firstStartedPulling="2026-01-21 16:14:35.118835231 +0000 UTC m=+6221.093184276" lastFinishedPulling="2026-01-21 16:14:43.537976372 +0000 UTC m=+6229.512325417" observedRunningTime="2026-01-21 16:14:44.521342398 +0000 UTC m=+6230.495691443" watchObservedRunningTime="2026-01-21 16:14:44.530469943 +0000 UTC m=+6230.504818998" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.577858 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-lwpmg" podStartSLOduration=3.234416104 podStartE2EDuration="11.577823611s" podCreationTimestamp="2026-01-21 16:14:33 +0000 UTC" firstStartedPulling="2026-01-21 16:14:35.370788104 +0000 UTC m=+6221.345137149" lastFinishedPulling="2026-01-21 16:14:43.714195611 +0000 UTC m=+6229.688544656" observedRunningTime="2026-01-21 16:14:44.562426341 +0000 UTC m=+6230.536775386" watchObservedRunningTime="2026-01-21 16:14:44.577823611 +0000 UTC m=+6230.552172676" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.592725 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-45x98" podStartSLOduration=2.352003123 podStartE2EDuration="10.592692365s" podCreationTimestamp="2026-01-21 16:14:34 +0000 UTC" firstStartedPulling="2026-01-21 16:14:35.325390927 +0000 UTC m=+6221.299739972" 
lastFinishedPulling="2026-01-21 16:14:43.566080169 +0000 UTC m=+6229.540429214" observedRunningTime="2026-01-21 16:14:44.59157269 +0000 UTC m=+6230.565921735" watchObservedRunningTime="2026-01-21 16:14:44.592692365 +0000 UTC m=+6230.567041410" Jan 21 16:14:44 crc kubenswrapper[4834]: I0121 16:14:44.630392 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-8674496d68-bmxdh" podStartSLOduration=3.214270325 podStartE2EDuration="11.630373271s" podCreationTimestamp="2026-01-21 16:14:33 +0000 UTC" firstStartedPulling="2026-01-21 16:14:35.121243846 +0000 UTC m=+6221.095592891" lastFinishedPulling="2026-01-21 16:14:43.537346792 +0000 UTC m=+6229.511695837" observedRunningTime="2026-01-21 16:14:44.628758551 +0000 UTC m=+6230.603107596" watchObservedRunningTime="2026-01-21 16:14:44.630373271 +0000 UTC m=+6230.604722316" Jan 21 16:14:54 crc kubenswrapper[4834]: I0121 16:14:54.619755 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-45x98" Jan 21 16:14:54 crc kubenswrapper[4834]: I0121 16:14:54.647980 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-7psl7" podStartSLOduration=12.852419716 podStartE2EDuration="21.647950692s" podCreationTimestamp="2026-01-21 16:14:33 +0000 UTC" firstStartedPulling="2026-01-21 16:14:34.872809614 +0000 UTC m=+6220.847158659" lastFinishedPulling="2026-01-21 16:14:43.66834059 +0000 UTC m=+6229.642689635" observedRunningTime="2026-01-21 16:14:44.668221722 +0000 UTC m=+6230.642570767" watchObservedRunningTime="2026-01-21 16:14:54.647950692 +0000 UTC m=+6240.622299737" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.122701 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.123841 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2227a23c-0978-4e07-836f-8077d3190e67" containerName="openstackclient" containerID="cri-o://69f2f07ce40fbd5b7ec8f15411465be2eff92a5210ec257f8fe566e9db4d66d8" gracePeriod=2 Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.137222 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.171710 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:14:57 crc kubenswrapper[4834]: E0121 16:14:57.172309 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2227a23c-0978-4e07-836f-8077d3190e67" containerName="openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.172332 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2227a23c-0978-4e07-836f-8077d3190e67" containerName="openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.172528 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2227a23c-0978-4e07-836f-8077d3190e67" containerName="openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.173332 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.184194 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.207605 4834 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="2227a23c-0978-4e07-836f-8077d3190e67" podUID="f7e09cad-3c34-40a0-86d1-18dda726ffd1" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.309487 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.309619 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.309788 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkqcn\" (UniqueName: \"kubernetes.io/projected/f7e09cad-3c34-40a0-86d1-18dda726ffd1-kube-api-access-fkqcn\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.411261 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.411377 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkqcn\" (UniqueName: \"kubernetes.io/projected/f7e09cad-3c34-40a0-86d1-18dda726ffd1-kube-api-access-fkqcn\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.411457 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.412326 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.418858 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7e09cad-3c34-40a0-86d1-18dda726ffd1-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7e09cad-3c34-40a0-86d1-18dda726ffd1\") " pod="openstack/openstackclient" Jan 21 16:14:57 crc 
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.475477 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.477336 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.483793 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tn25z"
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.496486 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.501003 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.618265 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6hg7\" (UniqueName: \"kubernetes.io/projected/3804ae5f-4153-4f31-b533-79d534c7e9a3-kube-api-access-q6hg7\") pod \"kube-state-metrics-0\" (UID: \"3804ae5f-4153-4f31-b533-79d534c7e9a3\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.719958 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6hg7\" (UniqueName: \"kubernetes.io/projected/3804ae5f-4153-4f31-b533-79d534c7e9a3-kube-api-access-q6hg7\") pod \"kube-state-metrics-0\" (UID: \"3804ae5f-4153-4f31-b533-79d534c7e9a3\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:14:57 crc kubenswrapper[4834]: I0121 16:14:57.843464 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6hg7\" (UniqueName: \"kubernetes.io/projected/3804ae5f-4153-4f31-b533-79d534c7e9a3-kube-api-access-q6hg7\") pod \"kube-state-metrics-0\" (UID: \"3804ae5f-4153-4f31-b533-79d534c7e9a3\") " pod="openstack/kube-state-metrics-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.042584 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.521998 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 21 16:14:58 crc kubenswrapper[4834]: W0121 16:14:58.526068 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7e09cad_3c34_40a0_86d1_18dda726ffd1.slice/crio-693ecba1a4d2d08d8372551ba92807c67233d0556c3bb23ed9a408d581b69167 WatchSource:0}: Error finding container 693ecba1a4d2d08d8372551ba92807c67233d0556c3bb23ed9a408d581b69167: Status 404 returned error can't find the container with id 693ecba1a4d2d08d8372551ba92807c67233d0556c3bb23ed9a408d581b69167
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.557004 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.571675 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.595503 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.610048 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.610129 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-phsbb"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.610318 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.610399 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.610532 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.664431 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f7e09cad-3c34-40a0-86d1-18dda726ffd1","Type":"ContainerStarted","Data":"693ecba1a4d2d08d8372551ba92807c67233d0556c3bb23ed9a408d581b69167"}
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.682806 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683113 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683168 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456xh\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-kube-api-access-456xh\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683234 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683319 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683349 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.683384 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.785696 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786200 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786237 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456xh\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-kube-api-access-456xh\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786271 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786314 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786337 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786359 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0"
pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.786968 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.791054 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.800959 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.814223 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.814433 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.814800 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.841599 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456xh\" (UniqueName: \"kubernetes.io/projected/0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d-kube-api-access-456xh\") pod \"alertmanager-metric-storage-0\" (UID: \"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d\") " pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.925880 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.929411 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.931477 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.931718 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-rp6gr" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.932607 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.934955 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.935396 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.936357 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.936501 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.937053 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 16:14:58 crc kubenswrapper[4834]: I0121 16:14:58.969296 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.091739 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102391 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102443 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102484 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102515 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j26kk\" (UniqueName: \"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-kube-api-access-j26kk\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102575 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102609 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102677 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102715 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102748 4834 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.102791 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204658 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204690 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204735 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204815 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204836 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204865 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204894 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j26kk\" (UniqueName: \"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-kube-api-access-j26kk\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204951 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.204982 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.213323 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.213990 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.216243 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.216984 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.217126 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.224528 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.225816 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.226309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.312417 4834 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.312483 4834 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e99a1bf7a86f72b91ef850aceea56b55729afd582d51bc1a5bfe086ac3c15f9d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.314690 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j26kk\" (UniqueName: \"kubernetes.io/projected/f4d8624c-870c-4ec9-bbbe-2cb20ed149df-kube-api-access-j26kk\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.371421 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fab2ab9-a313-4fae-a4f9-6180c350a5aa\") pod \"prometheus-metric-storage-0\" (UID: \"f4d8624c-870c-4ec9-bbbe-2cb20ed149df\") " pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.373343 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:14:59 crc kubenswrapper[4834]: W0121 16:14:59.377289 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3804ae5f_4153_4f31_b533_79d534c7e9a3.slice/crio-26a212e06242d8cf2d50b43859db0cc64ee575d2b6f1d3848c50e2a606aa51ae WatchSource:0}: Error finding container 26a212e06242d8cf2d50b43859db0cc64ee575d2b6f1d3848c50e2a606aa51ae: Status 404 returned error can't find the container with id 26a212e06242d8cf2d50b43859db0cc64ee575d2b6f1d3848c50e2a606aa51ae Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.583284 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.680688 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f7e09cad-3c34-40a0-86d1-18dda726ffd1","Type":"ContainerStarted","Data":"10c1ea140c358ed7621d68f78f5057166936aa318f051b39e4c6e1e8f3f4ebc6"} Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.695283 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3804ae5f-4153-4f31-b533-79d534c7e9a3","Type":"ContainerStarted","Data":"26a212e06242d8cf2d50b43859db0cc64ee575d2b6f1d3848c50e2a606aa51ae"} Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.697043 4834 generic.go:334] "Generic (PLEG): container finished" podID="2227a23c-0978-4e07-836f-8077d3190e67" containerID="69f2f07ce40fbd5b7ec8f15411465be2eff92a5210ec257f8fe566e9db4d66d8" exitCode=137 Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.804001 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.861842 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.861815398 podStartE2EDuration="2.861815398s" podCreationTimestamp="2026-01-21 16:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:14:59.757487222 +0000 UTC m=+6245.731836287" watchObservedRunningTime="2026-01-21 16:14:59.861815398 +0000 UTC m=+6245.836164443" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.873385 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 21 16:14:59 crc kubenswrapper[4834]: W0121 16:14:59.877410 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4d3dce_8ceb_4c4d_883b_bf91e68f5c7d.slice/crio-d3d047637bb1e822b38ceeb4937fb2923d71e10d2a516e1efa87be7f9a3d714f WatchSource:0}: Error finding container d3d047637bb1e822b38ceeb4937fb2923d71e10d2a516e1efa87be7f9a3d714f: Status 404 returned error can't find the container with id d3d047637bb1e822b38ceeb4937fb2923d71e10d2a516e1efa87be7f9a3d714f Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.924049 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffb4l\" (UniqueName: \"kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l\") pod \"2227a23c-0978-4e07-836f-8077d3190e67\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.924374 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config\") pod \"2227a23c-0978-4e07-836f-8077d3190e67\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.924670 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret\") pod \"2227a23c-0978-4e07-836f-8077d3190e67\" (UID: \"2227a23c-0978-4e07-836f-8077d3190e67\") " Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.930763 4834 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l" (OuterVolumeSpecName: "kube-api-access-ffb4l") pod "2227a23c-0978-4e07-836f-8077d3190e67" (UID: "2227a23c-0978-4e07-836f-8077d3190e67"). InnerVolumeSpecName "kube-api-access-ffb4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.958820 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2227a23c-0978-4e07-836f-8077d3190e67" (UID: "2227a23c-0978-4e07-836f-8077d3190e67"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:14:59 crc kubenswrapper[4834]: I0121 16:14:59.978873 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2227a23c-0978-4e07-836f-8077d3190e67" (UID: "2227a23c-0978-4e07-836f-8077d3190e67"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.028673 4834 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.028726 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffb4l\" (UniqueName: \"kubernetes.io/projected/2227a23c-0978-4e07-836f-8077d3190e67-kube-api-access-ffb4l\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.028745 4834 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2227a23c-0978-4e07-836f-8077d3190e67-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.157215 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj"] Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.160658 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.177692 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.177798 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.212849 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj"] Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.233075 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.233172 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9sj\" (UniqueName: \"kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.233338 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.233469 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.335654 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.336044 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj9sj\" (UniqueName: \"kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.336231 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.336631 4834 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.337745 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2227a23c-0978-4e07-836f-8077d3190e67" path="/var/lib/kubelet/pods/2227a23c-0978-4e07-836f-8077d3190e67/volumes" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.341355 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.355635 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj9sj\" (UniqueName: \"kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj\") pod \"collect-profiles-29483535-2v2sj\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.509204 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.721621 4834 scope.go:117] "RemoveContainer" containerID="69f2f07ce40fbd5b7ec8f15411465be2eff92a5210ec257f8fe566e9db4d66d8" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.722202 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.737828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerStarted","Data":"6c486155569e25e9fb3aa05db9c14f845644dbc29b3a8da2a4ce7a97a1087e61"} Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.741447 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d","Type":"ContainerStarted","Data":"d3d047637bb1e822b38ceeb4937fb2923d71e10d2a516e1efa87be7f9a3d714f"} Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.748235 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3804ae5f-4153-4f31-b533-79d534c7e9a3","Type":"ContainerStarted","Data":"de8886eed5815fcfb744fdf0863853f3a4617fc478046947394e8b8613db9caa"} Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.748386 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:15:00 crc kubenswrapper[4834]: I0121 16:15:00.777192 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.977185978 podStartE2EDuration="3.777167692s" podCreationTimestamp="2026-01-21 16:14:57 +0000 UTC" firstStartedPulling="2026-01-21 16:14:59.382631465 +0000 UTC m=+6245.356980510" lastFinishedPulling="2026-01-21 16:15:00.182613179 +0000 UTC m=+6246.156962224" observedRunningTime="2026-01-21 16:15:00.768694887 +0000 UTC m=+6246.743043922" watchObservedRunningTime="2026-01-21 16:15:00.777167692 +0000 UTC m=+6246.751516737" Jan 21 16:15:01 crc kubenswrapper[4834]: I0121 16:15:01.025504 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj"] Jan 21 16:15:01 crc kubenswrapper[4834]: W0121 16:15:01.049055 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod594e11d5_6b18_492e_9096_a898326ce42f.slice/crio-f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718 WatchSource:0}: Error finding container f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718: Status 404 returned error can't find the container with id f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718 Jan 21 16:15:01 crc kubenswrapper[4834]: I0121 16:15:01.765370 4834 generic.go:334] "Generic (PLEG): container finished" podID="594e11d5-6b18-492e-9096-a898326ce42f" containerID="96b77a1c531feee6a7c85c727ba18d11fff5889fdce6c7f84056303d3bcc128f" exitCode=0 Jan 21 16:15:01 crc kubenswrapper[4834]: I0121 16:15:01.765477 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" event={"ID":"594e11d5-6b18-492e-9096-a898326ce42f","Type":"ContainerDied","Data":"96b77a1c531feee6a7c85c727ba18d11fff5889fdce6c7f84056303d3bcc128f"} Jan 21 16:15:01 crc kubenswrapper[4834]: I0121 16:15:01.765787 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" event={"ID":"594e11d5-6b18-492e-9096-a898326ce42f","Type":"ContainerStarted","Data":"f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718"} Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.313767 4834 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.419851 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume\") pod \"594e11d5-6b18-492e-9096-a898326ce42f\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.420056 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj9sj\" (UniqueName: \"kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj\") pod \"594e11d5-6b18-492e-9096-a898326ce42f\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.420289 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume\") pod \"594e11d5-6b18-492e-9096-a898326ce42f\" (UID: \"594e11d5-6b18-492e-9096-a898326ce42f\") " Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.421893 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume" (OuterVolumeSpecName: "config-volume") pod "594e11d5-6b18-492e-9096-a898326ce42f" (UID: "594e11d5-6b18-492e-9096-a898326ce42f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.427653 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "594e11d5-6b18-492e-9096-a898326ce42f" (UID: "594e11d5-6b18-492e-9096-a898326ce42f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.428644 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj" (OuterVolumeSpecName: "kube-api-access-qj9sj") pod "594e11d5-6b18-492e-9096-a898326ce42f" (UID: "594e11d5-6b18-492e-9096-a898326ce42f"). InnerVolumeSpecName "kube-api-access-qj9sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.523664 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj9sj\" (UniqueName: \"kubernetes.io/projected/594e11d5-6b18-492e-9096-a898326ce42f-kube-api-access-qj9sj\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.523720 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/594e11d5-6b18-492e-9096-a898326ce42f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.523733 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/594e11d5-6b18-492e-9096-a898326ce42f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.833645 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" event={"ID":"594e11d5-6b18-492e-9096-a898326ce42f","Type":"ContainerDied","Data":"f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718"} Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.833972 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f901b8767162be48f6f85bee9832e62f4f7550f352aada1b4b174b35fec87718" Jan 21 16:15:03 crc kubenswrapper[4834]: I0121 16:15:03.833737 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj" Jan 21 16:15:04 crc kubenswrapper[4834]: I0121 16:15:04.411092 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn"] Jan 21 16:15:04 crc kubenswrapper[4834]: I0121 16:15:04.420410 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483490-8b9nn"] Jan 21 16:15:06 crc kubenswrapper[4834]: I0121 16:15:06.342252 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc347f98-a93c-4fd4-9aeb-1bcc58992509" path="/var/lib/kubelet/pods/cc347f98-a93c-4fd4-9aeb-1bcc58992509/volumes" Jan 21 16:15:06 crc kubenswrapper[4834]: I0121 16:15:06.864975 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerStarted","Data":"7b646e3fac099ead8b079d8098c027eb563fa9fd76baf4868a1e574692e41803"} Jan 21 16:15:06 crc kubenswrapper[4834]: I0121 16:15:06.866830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d","Type":"ContainerStarted","Data":"5654fe906a385ac6cb7add6d7d538cec02c69a6003f31915fb1844f8c67d9e64"} Jan 21 16:15:07 crc kubenswrapper[4834]: I0121 16:15:07.051651 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8580-account-create-update-qk6lj"] Jan 21 16:15:07 crc kubenswrapper[4834]: I0121 16:15:07.065114 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fsw9d"] Jan 21 16:15:07 crc kubenswrapper[4834]: I0121 16:15:07.075507 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8580-account-create-update-qk6lj"] Jan 21 16:15:07 crc kubenswrapper[4834]: I0121 16:15:07.084078 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-db-create-fsw9d"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.041612 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0f3c-account-create-update-zj7d8"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.052906 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-29c8-account-create-update-pmcx2"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.058274 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.065307 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7cmlv"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.076339 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wkcxw"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.087558 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7cmlv"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.097548 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0f3c-account-create-update-zj7d8"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.119117 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-29c8-account-create-update-pmcx2"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.132743 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wkcxw"] Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.340850 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139d4606-a92c-4c83-9291-9aa71c0715e1" path="/var/lib/kubelet/pods/139d4606-a92c-4c83-9291-9aa71c0715e1/volumes" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.341797 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b6fba8-c961-4b52-b97f-8189df4a5339" path="/var/lib/kubelet/pods/18b6fba8-c961-4b52-b97f-8189df4a5339/volumes" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.342701 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3f2378-7abe-45a0-9099-b1f299c2df01" path="/var/lib/kubelet/pods/3e3f2378-7abe-45a0-9099-b1f299c2df01/volumes" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.343401 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc8138c-cb56-4072-a2cd-709f7feee5af" path="/var/lib/kubelet/pods/3fc8138c-cb56-4072-a2cd-709f7feee5af/volumes" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.345244 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7a8005-894f-45b0-99bd-50bb7be1be48" path="/var/lib/kubelet/pods/ab7a8005-894f-45b0-99bd-50bb7be1be48/volumes" Jan 21 16:15:08 crc kubenswrapper[4834]: I0121 16:15:08.346319 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac0977e-6a7b-4218-bb8d-409dd7c4732e" path="/var/lib/kubelet/pods/dac0977e-6a7b-4218-bb8d-409dd7c4732e/volumes" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.226600 4834 scope.go:117] "RemoveContainer" containerID="3721e6df56f79c815569579c3843194163e62b497302c8c695d733c1a7382835" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.267907 4834 scope.go:117] "RemoveContainer" containerID="df8dddb87bfaa7e48780b1ed47943d85bc9046aa18825f30c8e3d4e55174f84a" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.311681 4834 scope.go:117] "RemoveContainer" 
containerID="813d4685cd37b2c6d05baf31f97e3b030923410e8491a86df47d3bc90890a1a9" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.379548 4834 scope.go:117] "RemoveContainer" containerID="d1da306fecbc7856a5db3cbf8d2b86808f604e286b30d768798a78674018c731" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.416354 4834 scope.go:117] "RemoveContainer" containerID="2cb8b628d3c0c415157e7f679a10b1cddc63517a06df551807c234901c5edfd0" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.483170 4834 scope.go:117] "RemoveContainer" containerID="dccabb73344b6d4b4832eee08f46cd0974fb2571dd940533e757d10af8253300" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.515470 4834 scope.go:117] "RemoveContainer" containerID="8da0c035eba0f37fbf36b2f9b877617e33bbf2afa4bd84c302ffdc13b7e5c96c" Jan 21 16:15:10 crc kubenswrapper[4834]: I0121 16:15:10.540567 4834 scope.go:117] "RemoveContainer" containerID="c284d50a2b5f867a9fef785cb6f1595b0dae76ce230609fe5e58bbc66d872e79" Jan 21 16:15:12 crc kubenswrapper[4834]: I0121 16:15:12.931866 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d","Type":"ContainerDied","Data":"5654fe906a385ac6cb7add6d7d538cec02c69a6003f31915fb1844f8c67d9e64"} Jan 21 16:15:12 crc kubenswrapper[4834]: I0121 16:15:12.933113 4834 generic.go:334] "Generic (PLEG): container finished" podID="0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d" containerID="5654fe906a385ac6cb7add6d7d538cec02c69a6003f31915fb1844f8c67d9e64" exitCode=0 Jan 21 16:15:13 crc kubenswrapper[4834]: I0121 16:15:13.944370 4834 generic.go:334] "Generic (PLEG): container finished" podID="f4d8624c-870c-4ec9-bbbe-2cb20ed149df" containerID="7b646e3fac099ead8b079d8098c027eb563fa9fd76baf4868a1e574692e41803" exitCode=0 Jan 21 16:15:13 crc kubenswrapper[4834]: I0121 16:15:13.944471 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerDied","Data":"7b646e3fac099ead8b079d8098c027eb563fa9fd76baf4868a1e574692e41803"} Jan 21 16:15:17 crc kubenswrapper[4834]: I0121 16:15:17.006760 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d","Type":"ContainerStarted","Data":"92cf31021a776f419d906f7aade32d0c42c84ee6b1aac708cd46f8f2f47fb672"} Jan 21 16:15:20 crc kubenswrapper[4834]: I0121 16:15:20.039495 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d","Type":"ContainerStarted","Data":"57abdfe638d28a55817a92dccfbc274422594d7842014d582b8aa3645109a46a"} Jan 21 16:15:20 crc kubenswrapper[4834]: I0121 16:15:20.039824 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:15:20 crc kubenswrapper[4834]: I0121 16:15:20.046026 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 21 16:15:20 crc kubenswrapper[4834]: I0121 16:15:20.075916 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.908303523 podStartE2EDuration="22.075890512s" podCreationTimestamp="2026-01-21 16:14:58 +0000 UTC" firstStartedPulling="2026-01-21 16:14:59.880501721 +0000 UTC m=+6245.854850766" lastFinishedPulling="2026-01-21 16:15:16.04808871 +0000 UTC 
m=+6262.022437755" observedRunningTime="2026-01-21 16:15:20.066447818 +0000 UTC m=+6266.040796883" watchObservedRunningTime="2026-01-21 16:15:20.075890512 +0000 UTC m=+6266.050239547" Jan 21 16:15:21 crc kubenswrapper[4834]: I0121 16:15:21.050897 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerStarted","Data":"c341c3de5b1394ee58fd6994a6ace3905787a6e3d24a855cbc8f631c4308f882"} Jan 21 16:15:22 crc kubenswrapper[4834]: I0121 16:15:22.036856 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rpx4c"] Jan 21 16:15:22 crc kubenswrapper[4834]: I0121 16:15:22.045410 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rpx4c"] Jan 21 16:15:22 crc kubenswrapper[4834]: I0121 16:15:22.387146 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ffe6c6-2260-4d10-9ffb-7a3bc7286f58" path="/var/lib/kubelet/pods/83ffe6c6-2260-4d10-9ffb-7a3bc7286f58/volumes" Jan 21 16:15:25 crc kubenswrapper[4834]: I0121 16:15:25.107417 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerStarted","Data":"f3ee2cbcca41ff5ce453a7984150d88de13b3a00eff8c2aae812b92a66053902"} Jan 21 16:15:31 crc kubenswrapper[4834]: I0121 16:15:31.170244 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f4d8624c-870c-4ec9-bbbe-2cb20ed149df","Type":"ContainerStarted","Data":"82fe388c9884ca4c319e3ede370f8954b01e4533df61f2c837698f4b21e0874e"} Jan 21 16:15:31 crc kubenswrapper[4834]: I0121 16:15:31.196258 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.814197147 podStartE2EDuration="34.196232377s" podCreationTimestamp="2026-01-21 16:14:57 +0000 UTC" firstStartedPulling="2026-01-21 16:15:00.219136918 +0000 UTC m=+6246.193485963" lastFinishedPulling="2026-01-21 16:15:30.601172148 +0000 UTC m=+6276.575521193" observedRunningTime="2026-01-21 16:15:31.190761877 +0000 UTC m=+6277.165110922" watchObservedRunningTime="2026-01-21 16:15:31.196232377 +0000 UTC m=+6277.170581422" Jan 21 16:15:34 crc kubenswrapper[4834]: I0121 16:15:34.584244 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 16:15:35 crc kubenswrapper[4834]: I0121 16:15:35.036415 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2pbrd"] Jan 21 16:15:35 crc kubenswrapper[4834]: I0121 16:15:35.050790 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2pbrd"] Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.055220 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-f57q8"] Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.064582 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-f57q8"] Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.350582 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea" path="/var/lib/kubelet/pods/2b2c4ace-cc51-4bb3-a8c8-3b44bf7189ea/volumes" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.352357 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b65a3e27-7598-4756-b6d1-7a8a9ad74bec" path="/var/lib/kubelet/pods/b65a3e27-7598-4756-b6d1-7a8a9ad74bec/volumes" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.745433 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:15:36 crc kubenswrapper[4834]: E0121 16:15:36.745909 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594e11d5-6b18-492e-9096-a898326ce42f" containerName="collect-profiles" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.745943 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="594e11d5-6b18-492e-9096-a898326ce42f" containerName="collect-profiles" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.746179 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="594e11d5-6b18-492e-9096-a898326ce42f" containerName="collect-profiles" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.748187 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.751453 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.751505 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.767876 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856692 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856769 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5x4\" (UniqueName: \"kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856791 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856878 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856962 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.856976 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.857018 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.959135 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5x4\" (UniqueName: \"kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.959186 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.960605 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.960723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.960742 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.960790 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.960946 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.961195 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.961489 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.966348 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.968782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.969442 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.971399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:36 crc kubenswrapper[4834]: I0121 16:15:36.977769 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5x4\" (UniqueName: \"kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4\") pod \"ceilometer-0\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " pod="openstack/ceilometer-0" Jan 21 16:15:37 crc kubenswrapper[4834]: I0121 16:15:37.077054 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:15:37 crc kubenswrapper[4834]: I0121 16:15:37.618162 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:15:38 crc kubenswrapper[4834]: I0121 16:15:38.241865 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerStarted","Data":"1d59bcaca0f2c7d29409f43cd953ffd785ed18183cc835932b00d19b42491354"} Jan 21 16:15:39 crc kubenswrapper[4834]: I0121 16:15:39.252491 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerStarted","Data":"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c"} Jan 21 16:15:40 crc kubenswrapper[4834]: I0121 16:15:40.265134 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerStarted","Data":"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700"} Jan 21 16:15:41 crc kubenswrapper[4834]: I0121 16:15:41.277268 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerStarted","Data":"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0"} Jan 21 16:15:42 crc kubenswrapper[4834]: I0121 16:15:42.288829 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerStarted","Data":"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37"} Jan 21 16:15:42 crc kubenswrapper[4834]: I0121 16:15:42.289265 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:15:42 crc kubenswrapper[4834]: I0121 16:15:42.318496 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.208125521 podStartE2EDuration="6.318467599s" podCreationTimestamp="2026-01-21 16:15:36 +0000 UTC" firstStartedPulling="2026-01-21 16:15:37.631505997 +0000 UTC m=+6283.605855042" lastFinishedPulling="2026-01-21 16:15:41.741848075 +0000 UTC m=+6287.716197120" observedRunningTime="2026-01-21 16:15:42.308298182 +0000 UTC m=+6288.282647227" watchObservedRunningTime="2026-01-21 16:15:42.318467599 +0000 UTC m=+6288.292816644" Jan 21 16:15:44 crc kubenswrapper[4834]: I0121 16:15:44.583498 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 16:15:44 crc kubenswrapper[4834]: I0121 16:15:44.586291 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 16:15:45 crc kubenswrapper[4834]: I0121 16:15:45.321052 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.294579 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-xtvl5"] Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.301023 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.307678 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xtvl5"] Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.434450 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nt6\" (UniqueName: \"kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.434511 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.500629 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2a0e-account-create-update-2pmzh"] Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.502575 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.508592 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.516451 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2a0e-account-create-update-2pmzh"] Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.538074 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4nt6\" (UniqueName: \"kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.538198 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.540654 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.568548 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4nt6\" (UniqueName: \"kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6\") pod \"aodh-db-create-xtvl5\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.626227 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.640515 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.640794 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kz6m\" (UniqueName: \"kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.742813 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kz6m\" (UniqueName: \"kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.743039 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.744134 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.772387 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kz6m\" (UniqueName: \"kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m\") pod \"aodh-2a0e-account-create-update-2pmzh\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:48 crc kubenswrapper[4834]: I0121 16:15:48.839434 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:49 crc kubenswrapper[4834]: I0121 16:15:49.219445 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xtvl5"] Jan 21 16:15:49 crc kubenswrapper[4834]: I0121 16:15:49.366971 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xtvl5" event={"ID":"dbab3f15-8f0e-4213-9c94-75068cd1502d","Type":"ContainerStarted","Data":"b4bccecdf5dadd9c9e940deb4902e178a162bd14d167263138de33d59e823869"} Jan 21 16:15:49 crc kubenswrapper[4834]: I0121 16:15:49.421440 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2a0e-account-create-update-2pmzh"] Jan 21 16:15:49 crc kubenswrapper[4834]: W0121 16:15:49.425241 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1acf71cc_5311_4be5_9499_3eb5c42ac831.slice/crio-95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049 WatchSource:0}: Error finding container 95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049: Status 404 returned error can't find the container with id 95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049 Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.050909 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4899h"] Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.064849 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4899h"] Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.337660 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51eb0eb-3a6b-4022-918d-ace6fad9a93e" path="/var/lib/kubelet/pods/d51eb0eb-3a6b-4022-918d-ace6fad9a93e/volumes" Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.378128 4834 generic.go:334] "Generic (PLEG): container finished" podID="dbab3f15-8f0e-4213-9c94-75068cd1502d" containerID="9ab268761ea405d6875465a646c34c8fb600432bd7c6d6971364760179237810" exitCode=0 Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.378228 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xtvl5" event={"ID":"dbab3f15-8f0e-4213-9c94-75068cd1502d","Type":"ContainerDied","Data":"9ab268761ea405d6875465a646c34c8fb600432bd7c6d6971364760179237810"} Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.380123 4834 generic.go:334] "Generic (PLEG): container finished" podID="1acf71cc-5311-4be5-9499-3eb5c42ac831" containerID="9b8c368dc48e3799aef0f3a6b4eeedf3a4ddf18603eec5afe92a0e49e46cb988" exitCode=0 Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.380168 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2a0e-account-create-update-2pmzh" event={"ID":"1acf71cc-5311-4be5-9499-3eb5c42ac831","Type":"ContainerDied","Data":"9b8c368dc48e3799aef0f3a6b4eeedf3a4ddf18603eec5afe92a0e49e46cb988"} Jan 21 16:15:50 crc kubenswrapper[4834]: I0121 16:15:50.380199 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2a0e-account-create-update-2pmzh" event={"ID":"1acf71cc-5311-4be5-9499-3eb5c42ac831","Type":"ContainerStarted","Data":"95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049"} Jan 21 16:15:51 crc kubenswrapper[4834]: I0121 16:15:51.829067 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:51 crc kubenswrapper[4834]: I0121 16:15:51.914891 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts\") pod \"1acf71cc-5311-4be5-9499-3eb5c42ac831\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " Jan 21 16:15:51 crc kubenswrapper[4834]: I0121 16:15:51.915015 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kz6m\" (UniqueName: \"kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m\") pod \"1acf71cc-5311-4be5-9499-3eb5c42ac831\" (UID: \"1acf71cc-5311-4be5-9499-3eb5c42ac831\") " Jan 21 16:15:51 crc kubenswrapper[4834]: I0121 16:15:51.916542 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1acf71cc-5311-4be5-9499-3eb5c42ac831" (UID: "1acf71cc-5311-4be5-9499-3eb5c42ac831"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:51 crc kubenswrapper[4834]: I0121 16:15:51.922144 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m" (OuterVolumeSpecName: "kube-api-access-4kz6m") pod "1acf71cc-5311-4be5-9499-3eb5c42ac831" (UID: "1acf71cc-5311-4be5-9499-3eb5c42ac831"). InnerVolumeSpecName "kube-api-access-4kz6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.002487 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.017481 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1acf71cc-5311-4be5-9499-3eb5c42ac831-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.017553 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kz6m\" (UniqueName: \"kubernetes.io/projected/1acf71cc-5311-4be5-9499-3eb5c42ac831-kube-api-access-4kz6m\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.119705 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts\") pod \"dbab3f15-8f0e-4213-9c94-75068cd1502d\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.120025 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nt6\" (UniqueName: \"kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6\") pod \"dbab3f15-8f0e-4213-9c94-75068cd1502d\" (UID: \"dbab3f15-8f0e-4213-9c94-75068cd1502d\") " Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.120310 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbab3f15-8f0e-4213-9c94-75068cd1502d" (UID: "dbab3f15-8f0e-4213-9c94-75068cd1502d"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.121175 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbab3f15-8f0e-4213-9c94-75068cd1502d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.123439 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6" (OuterVolumeSpecName: "kube-api-access-v4nt6") pod "dbab3f15-8f0e-4213-9c94-75068cd1502d" (UID: "dbab3f15-8f0e-4213-9c94-75068cd1502d"). InnerVolumeSpecName "kube-api-access-v4nt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.223788 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4nt6\" (UniqueName: \"kubernetes.io/projected/dbab3f15-8f0e-4213-9c94-75068cd1502d-kube-api-access-v4nt6\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.399239 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xtvl5" event={"ID":"dbab3f15-8f0e-4213-9c94-75068cd1502d","Type":"ContainerDied","Data":"b4bccecdf5dadd9c9e940deb4902e178a162bd14d167263138de33d59e823869"} Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.399283 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4bccecdf5dadd9c9e940deb4902e178a162bd14d167263138de33d59e823869" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.399318 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xtvl5" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.403159 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2a0e-account-create-update-2pmzh" event={"ID":"1acf71cc-5311-4be5-9499-3eb5c42ac831","Type":"ContainerDied","Data":"95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049"} Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.403190 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95efe3bec8dde1411a2d90a7af026a66fe534ed0c83e30612a11b3c3eddf6049" Jan 21 16:15:52 crc kubenswrapper[4834]: I0121 16:15:52.403270 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2a0e-account-create-update-2pmzh" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.857429 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-4s6l2"] Jan 21 16:15:53 crc kubenswrapper[4834]: E0121 16:15:53.858350 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbab3f15-8f0e-4213-9c94-75068cd1502d" containerName="mariadb-database-create" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.858369 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbab3f15-8f0e-4213-9c94-75068cd1502d" containerName="mariadb-database-create" Jan 21 16:15:53 crc kubenswrapper[4834]: E0121 16:15:53.858399 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acf71cc-5311-4be5-9499-3eb5c42ac831" containerName="mariadb-account-create-update" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.858408 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acf71cc-5311-4be5-9499-3eb5c42ac831" containerName="mariadb-account-create-update" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.858708 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acf71cc-5311-4be5-9499-3eb5c42ac831" containerName="mariadb-account-create-update" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.858756 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbab3f15-8f0e-4213-9c94-75068cd1502d" containerName="mariadb-database-create" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.859946 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.861870 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.862091 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqx7b" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.863078 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.863701 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.872063 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4s6l2"] Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.961895 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98lqw\" (UniqueName: \"kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.962076 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.962142 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data\") pod \"aodh-db-sync-4s6l2\" (UID: 
\"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:53 crc kubenswrapper[4834]: I0121 16:15:53.962209 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.064740 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98lqw\" (UniqueName: \"kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.064811 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.064844 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.064879 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.071177 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.071191 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.088702 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98lqw\" (UniqueName: \"kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.089372 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data\") pod \"aodh-db-sync-4s6l2\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.183238 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:15:54 crc kubenswrapper[4834]: I0121 16:15:54.681514 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4s6l2"] Jan 21 16:15:55 crc kubenswrapper[4834]: I0121 16:15:55.435352 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4s6l2" event={"ID":"2a17e206-cd5a-43f0-b515-941e6664a63e","Type":"ContainerStarted","Data":"4dc5245ad577ba5169b26f4f033dcd3278132caa4cf1b92fe1b35f82eefa1f4c"} Jan 21 16:15:59 crc kubenswrapper[4834]: I0121 16:15:59.417165 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:16:00 crc kubenswrapper[4834]: I0121 16:16:00.486493 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4s6l2" event={"ID":"2a17e206-cd5a-43f0-b515-941e6664a63e","Type":"ContainerStarted","Data":"258b12302bf0faa45a35215d8f5a279e9ee3bb763f0bceed1f7b5cf83b673856"} Jan 21 16:16:00 crc kubenswrapper[4834]: I0121 16:16:00.515688 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-4s6l2" podStartSLOduration=2.792064141 podStartE2EDuration="7.515668836s" podCreationTimestamp="2026-01-21 16:15:53 +0000 UTC" firstStartedPulling="2026-01-21 16:15:54.686287214 +0000 UTC m=+6300.660636259" lastFinishedPulling="2026-01-21 16:15:59.409891909 +0000 UTC m=+6305.384240954" observedRunningTime="2026-01-21 16:16:00.512320883 +0000 UTC m=+6306.486669938" watchObservedRunningTime="2026-01-21 16:16:00.515668836 +0000 UTC m=+6306.490017881" Jan 21 16:16:02 crc kubenswrapper[4834]: I0121 16:16:02.511832 4834 generic.go:334] "Generic (PLEG): container finished" podID="2a17e206-cd5a-43f0-b515-941e6664a63e" containerID="258b12302bf0faa45a35215d8f5a279e9ee3bb763f0bceed1f7b5cf83b673856" exitCode=0 Jan 21 16:16:02 crc kubenswrapper[4834]: I0121 16:16:02.511912 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4s6l2" event={"ID":"2a17e206-cd5a-43f0-b515-941e6664a63e","Type":"ContainerDied","Data":"258b12302bf0faa45a35215d8f5a279e9ee3bb763f0bceed1f7b5cf83b673856"} Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.032055 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.184765 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data\") pod \"2a17e206-cd5a-43f0-b515-941e6664a63e\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.185386 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98lqw\" (UniqueName: \"kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw\") pod \"2a17e206-cd5a-43f0-b515-941e6664a63e\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.185513 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts\") pod \"2a17e206-cd5a-43f0-b515-941e6664a63e\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.185593 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle\") pod \"2a17e206-cd5a-43f0-b515-941e6664a63e\" (UID: \"2a17e206-cd5a-43f0-b515-941e6664a63e\") " Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.193689 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts" (OuterVolumeSpecName: "scripts") pod "2a17e206-cd5a-43f0-b515-941e6664a63e" (UID: "2a17e206-cd5a-43f0-b515-941e6664a63e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.194722 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw" (OuterVolumeSpecName: "kube-api-access-98lqw") pod "2a17e206-cd5a-43f0-b515-941e6664a63e" (UID: "2a17e206-cd5a-43f0-b515-941e6664a63e"). InnerVolumeSpecName "kube-api-access-98lqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.215194 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a17e206-cd5a-43f0-b515-941e6664a63e" (UID: "2a17e206-cd5a-43f0-b515-941e6664a63e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.215701 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data" (OuterVolumeSpecName: "config-data") pod "2a17e206-cd5a-43f0-b515-941e6664a63e" (UID: "2a17e206-cd5a-43f0-b515-941e6664a63e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.288367 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.288419 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98lqw\" (UniqueName: \"kubernetes.io/projected/2a17e206-cd5a-43f0-b515-941e6664a63e-kube-api-access-98lqw\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.288432 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.288442 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a17e206-cd5a-43f0-b515-941e6664a63e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.554577 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4s6l2" event={"ID":"2a17e206-cd5a-43f0-b515-941e6664a63e","Type":"ContainerDied","Data":"4dc5245ad577ba5169b26f4f033dcd3278132caa4cf1b92fe1b35f82eefa1f4c"} Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.554707 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc5245ad577ba5169b26f4f033dcd3278132caa4cf1b92fe1b35f82eefa1f4c" Jan 21 16:16:04 crc kubenswrapper[4834]: I0121 16:16:04.554790 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4s6l2" Jan 21 16:16:07 crc kubenswrapper[4834]: I0121 16:16:07.085327 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.950512 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 21 16:16:08 crc kubenswrapper[4834]: E0121 16:16:08.956854 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a17e206-cd5a-43f0-b515-941e6664a63e" containerName="aodh-db-sync" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.956879 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a17e206-cd5a-43f0-b515-941e6664a63e" containerName="aodh-db-sync" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.957233 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a17e206-cd5a-43f0-b515-941e6664a63e" containerName="aodh-db-sync" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.959227 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.961061 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqx7b" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.961613 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.961778 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 21 16:16:08 crc kubenswrapper[4834]: I0121 16:16:08.964292 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.095817 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.095919 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-config-data\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.096076 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-scripts\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.096110 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-627xm\" (UniqueName: \"kubernetes.io/projected/ba644516-d083-404a-b5fc-6b5589098b4a-kube-api-access-627xm\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.198203 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-scripts\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.198251 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-627xm\" (UniqueName: \"kubernetes.io/projected/ba644516-d083-404a-b5fc-6b5589098b4a-kube-api-access-627xm\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.198303 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.198400 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-config-data\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: 
I0121 16:16:09.205078 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-config-data\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.205362 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-scripts\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.216053 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-627xm\" (UniqueName: \"kubernetes.io/projected/ba644516-d083-404a-b5fc-6b5589098b4a-kube-api-access-627xm\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.222891 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba644516-d083-404a-b5fc-6b5589098b4a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ba644516-d083-404a-b5fc-6b5589098b4a\") " pod="openstack/aodh-0" Jan 21 16:16:09 crc kubenswrapper[4834]: I0121 16:16:09.277557 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:09.783396 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.275169 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.276703 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-central-agent" containerID="cri-o://73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c" gracePeriod=30 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.277149 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="sg-core" containerID="cri-o://5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0" gracePeriod=30 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.277200 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-notification-agent" containerID="cri-o://20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700" gracePeriod=30 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.277195 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="proxy-httpd" containerID="cri-o://8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37" gracePeriod=30 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.632362 4834 generic.go:334] "Generic (PLEG): container finished" podID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerID="8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37" exitCode=0 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.632697 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerDied","Data":"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37"} Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.632741 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerDied","Data":"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0"} Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.632705 4834 generic.go:334] "Generic (PLEG): container finished" podID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerID="5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0" exitCode=2 Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.648536 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba644516-d083-404a-b5fc-6b5589098b4a","Type":"ContainerStarted","Data":"04d220229e5d3113c8515c49f514f3f8311beaf965d66fd419e146910d5ec779"} Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.739324 4834 scope.go:117] "RemoveContainer" containerID="f45e243a4d91819ac1c5ada9ccc0853f8c14f5e498877660edeb9e9432136409" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.799919 4834 scope.go:117] "RemoveContainer" containerID="53b1f07ea3d4b85a75fadca3f9dd1aba6a96dabbc168062a8e560f8e418b852a" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.851502 4834 scope.go:117] "RemoveContainer" containerID="7ce4cdb8d6434e098aaf222c71f3d4796c8b589cfec5d1888b581ee0f0462973" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.900742 4834 scope.go:117] "RemoveContainer" containerID="ff6ba458afbbf582c7ab171632a4b753819907f5438ec518b7b5087c186e07e6" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.958917 4834 scope.go:117] "RemoveContainer" containerID="09f8eb4f4c0ff627c687af12d1c2d318b7b59ddbef2589e50877945780f52d27" Jan 21 16:16:10 crc kubenswrapper[4834]: I0121 16:16:10.998342 4834 scope.go:117] "RemoveContainer" containerID="077572a5bed0cb4a17cbf4f9983f73810dbb8ab174a87932e965df8f7d6dc4f8" Jan 21 16:16:11 crc kubenswrapper[4834]: I0121 16:16:11.668971 4834 generic.go:334] "Generic (PLEG): container finished" podID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerID="73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c" exitCode=0 Jan 21 16:16:11 crc kubenswrapper[4834]: I0121 16:16:11.669018 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerDied","Data":"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c"} Jan 21 16:16:11 crc kubenswrapper[4834]: I0121 16:16:11.672030 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba644516-d083-404a-b5fc-6b5589098b4a","Type":"ContainerStarted","Data":"9dd6bb68aabb75312d4893f699a3ea4e2d542adc8644af5490bb7693417e0f9b"} Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.150400 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.264622 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.264706 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.264752 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.264863 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.264923 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5x4\" (UniqueName: \"kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.265090 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.265172 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml\") pod \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\" (UID: \"f19274f0-6d4c-49a8-91bc-550f1b4c7589\") " Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.265215 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.265659 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.266381 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.266401 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f19274f0-6d4c-49a8-91bc-550f1b4c7589-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.272130 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts" (OuterVolumeSpecName: "scripts") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.272233 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4" (OuterVolumeSpecName: "kube-api-access-mg5x4") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "kube-api-access-mg5x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.302976 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.348301 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.387649 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.387739 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.387904 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.388018 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5x4\" (UniqueName: \"kubernetes.io/projected/f19274f0-6d4c-49a8-91bc-550f1b4c7589-kube-api-access-mg5x4\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.394706 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data" (OuterVolumeSpecName: "config-data") pod "f19274f0-6d4c-49a8-91bc-550f1b4c7589" (UID: "f19274f0-6d4c-49a8-91bc-550f1b4c7589"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.490298 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19274f0-6d4c-49a8-91bc-550f1b4c7589-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.683789 4834 generic.go:334] "Generic (PLEG): container finished" podID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerID="20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700" exitCode=0 Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.683873 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.683891 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerDied","Data":"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700"} Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.685634 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f19274f0-6d4c-49a8-91bc-550f1b4c7589","Type":"ContainerDied","Data":"1d59bcaca0f2c7d29409f43cd953ffd785ed18183cc835932b00d19b42491354"} Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.685661 4834 scope.go:117] "RemoveContainer" containerID="8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.689653 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba644516-d083-404a-b5fc-6b5589098b4a","Type":"ContainerStarted","Data":"8b0078563a975c334cc4ece7194c70687e7332d3ef0746442782b2333f19f8e4"} Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.709324 4834 scope.go:117] "RemoveContainer" containerID="5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.750060 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.754909 4834 scope.go:117] "RemoveContainer" containerID="20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.770714 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.785594 4834 scope.go:117] "RemoveContainer" containerID="73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.793732 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.794439 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-notification-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.794461 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-notification-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.794476 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-central-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.794484 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-central-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.794618 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="sg-core" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.794629 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="sg-core" Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.794664 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="proxy-httpd" Jan 21 16:16:12 crc 
kubenswrapper[4834]: I0121 16:16:12.794673 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="proxy-httpd" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.794993 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="proxy-httpd" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.795017 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-central-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.795045 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="ceilometer-notification-agent" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.795080 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" containerName="sg-core" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.799816 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.803113 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.803757 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.808267 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.848056 4834 scope.go:117] "RemoveContainer" containerID="8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37" Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.848774 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37\": container with ID starting with 8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37 not found: ID does not exist" containerID="8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.848824 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37"} err="failed to get container status \"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37\": rpc error: code = NotFound desc = could not find container \"8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37\": container with ID starting with 8f7a767ff20f8906a9624c116e2fdb9adee5334766f95ac1821787b726968d37 not found: ID does not exist" Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.848857 4834 scope.go:117] "RemoveContainer" containerID="5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0" Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.849160 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0\": container with ID starting with 5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0 not found: ID does not exist" containerID="5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0" Jan 21 16:16:12 crc 
kubenswrapper[4834]: I0121 16:16:12.849182 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0"} err="failed to get container status \"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0\": rpc error: code = NotFound desc = could not find container \"5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0\": container with ID starting with 5c233d179d6876d891f9889eb621974cfa1291ce73ab8e40ca5b65d603d6d0f0 not found: ID does not exist"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.849215 4834 scope.go:117] "RemoveContainer" containerID="20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700"
Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.849570 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700\": container with ID starting with 20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700 not found: ID does not exist" containerID="20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.849597 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700"} err="failed to get container status \"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700\": rpc error: code = NotFound desc = could not find container \"20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700\": container with ID starting with 20592571b97697db3ce0b681e02bd070e5bc318389668ee790d341d323e4e700 not found: ID does not exist"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.849654 4834 scope.go:117] "RemoveContainer" containerID="73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c"
Jan 21 16:16:12 crc kubenswrapper[4834]: E0121 16:16:12.850002 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c\": container with ID starting with 73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c not found: ID does not exist" containerID="73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.850030 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c"} err="failed to get container status \"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c\": rpc error: code = NotFound desc = could not find container \"73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c\": container with ID starting with 73a1660a4f5db9a1b3d9d12eca8da509605df3bb9d36fffe334de9d929efcb0c not found: ID does not exist"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.903157 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.903698 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7m8r\" (UniqueName: \"kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.903830 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.903904 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.903957 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.904044 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:12 crc kubenswrapper[4834]: I0121 16:16:12.904121 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.006683 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007755 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7m8r\" (UniqueName: \"kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007834 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007885 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007909 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007952 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.007991 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.008627 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.008726 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.015474 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.015671 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.015866 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.019592 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.028154 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7m8r\" (UniqueName: \"kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r\") pod \"ceilometer-0\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.119351 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.704428 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba644516-d083-404a-b5fc-6b5589098b4a","Type":"ContainerStarted","Data":"9e5654f4981169879ed21624576d39e0c2ded21f16cd15dc10522a7231b38f20"}
Jan 21 16:16:13 crc kubenswrapper[4834]: I0121 16:16:13.786186 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:16:13 crc kubenswrapper[4834]: W0121 16:16:13.786967 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a438e3_1151_44b6_aac3_a3bc30878c99.slice/crio-d6587ec77c2eae5def97513d52e8ae201a23889d90335f9616a9590cc4f3a6b0 WatchSource:0}: Error finding container d6587ec77c2eae5def97513d52e8ae201a23889d90335f9616a9590cc4f3a6b0: Status 404 returned error can't find the container with id d6587ec77c2eae5def97513d52e8ae201a23889d90335f9616a9590cc4f3a6b0
Jan 21 16:16:14 crc kubenswrapper[4834]: I0121 16:16:14.338648 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19274f0-6d4c-49a8-91bc-550f1b4c7589" path="/var/lib/kubelet/pods/f19274f0-6d4c-49a8-91bc-550f1b4c7589/volumes"
Jan 21 16:16:14 crc kubenswrapper[4834]: I0121 16:16:14.719429 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerStarted","Data":"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5"}
Jan 21 16:16:14 crc kubenswrapper[4834]: I0121 16:16:14.719482 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerStarted","Data":"d6587ec77c2eae5def97513d52e8ae201a23889d90335f9616a9590cc4f3a6b0"}
Jan 21 16:16:15 crc kubenswrapper[4834]: I0121 16:16:15.731292 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerStarted","Data":"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023"}
Jan 21 16:16:15 crc kubenswrapper[4834]: I0121 16:16:15.736509 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ba644516-d083-404a-b5fc-6b5589098b4a","Type":"ContainerStarted","Data":"17408a284661d1752d638e3f7bc05046551f395fdbeb0f844c1e52d4798dc27f"}
Jan 21 16:16:15 crc kubenswrapper[4834]: I0121 16:16:15.772622 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.1984961 podStartE2EDuration="7.772597997s" podCreationTimestamp="2026-01-21 16:16:08 +0000 UTC" firstStartedPulling="2026-01-21 16:16:09.793707157 +0000 UTC m=+6315.768056202" lastFinishedPulling="2026-01-21 16:16:15.367809034 +0000 UTC m=+6321.342158099" observedRunningTime="2026-01-21 16:16:15.765158264 +0000 UTC m=+6321.739507309" watchObservedRunningTime="2026-01-21 16:16:15.772597997 +0000 UTC m=+6321.746947042"
Jan 21 16:16:16 crc kubenswrapper[4834]: I0121 16:16:16.752188 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerStarted","Data":"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419"}
Jan 21 16:16:17 crc kubenswrapper[4834]: I0121 16:16:17.113517 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:16:17 crc kubenswrapper[4834]: I0121 16:16:17.113599 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:16:18 crc kubenswrapper[4834]: I0121 16:16:18.780208 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerStarted","Data":"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72"}
Jan 21 16:16:18 crc kubenswrapper[4834]: I0121 16:16:18.781059 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.461505 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.774584782 podStartE2EDuration="9.461479076s" podCreationTimestamp="2026-01-21 16:16:12 +0000 UTC" firstStartedPulling="2026-01-21 16:16:13.789997377 +0000 UTC m=+6319.764346422" lastFinishedPulling="2026-01-21 16:16:17.476891671 +0000 UTC m=+6323.451240716" observedRunningTime="2026-01-21 16:16:18.811288193 +0000 UTC m=+6324.785637248" watchObservedRunningTime="2026-01-21 16:16:21.461479076 +0000 UTC m=+6327.435828121"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.473036 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-2zsnf"]
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.474824 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.494540 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2zsnf"]
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.511395 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj2r\" (UniqueName: \"kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.511473 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.589670 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-42f3-account-create-update-tq7hk"]
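[Annotation] The two pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the time spent pulling images. For ceilometer-0: 16:16:21.461479076 - 16:16:12 = 9.461479076s, pulls took 16:16:17.476891671 - 16:16:13.789997377 = 3.686894294s, and 9.461479076 - 3.686894294 = 5.774584782s, which matches the logged value. A small Go check with the timestamps copied verbatim from the entry:

package main

import (
	"fmt"
	"time"
)

// layout matches the "2026-01-21 16:16:12 +0000 UTC" form in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 16:16:12 +0000 UTC")
	running := mustParse("2026-01-21 16:16:21.461479076 +0000 UTC")
	pullStart := mustParse("2026-01-21 16:16:13.789997377 +0000 UTC")
	pullEnd := mustParse("2026-01-21 16:16:17.476891671 +0000 UTC")

	e2e := running.Sub(created)         // 9.461479076s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 5.774584782s = podStartSLOduration
	fmt.Println(e2e, slo)
}

The same arithmetic reproduces aodh-0's numbers: 7.772597997s E2E minus 5.574101877s of pulling gives the logged 2.1984961s.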
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.591973 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.594794 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.613945 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.614865 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.615089 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gj2r\" (UniqueName: \"kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.615412 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-42f3-account-create-update-tq7hk"]
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.651479 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gj2r\" (UniqueName: \"kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r\") pod \"manila-db-create-2zsnf\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") " pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.717039 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbv5b\" (UniqueName: \"kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.717095 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.798348 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.818807 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbv5b\" (UniqueName: \"kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.818864 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.819590 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.848359 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbv5b\" (UniqueName: \"kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b\") pod \"manila-42f3-account-create-update-tq7hk\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") " pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:21 crc kubenswrapper[4834]: I0121 16:16:21.926772 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:22 crc kubenswrapper[4834]: W0121 16:16:22.328066 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7423589_b71a_4f9d_967f_ea1591657d19.slice/crio-071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298 WatchSource:0}: Error finding container 071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298: Status 404 returned error can't find the container with id 071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.335854 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-2zsnf"]
Jan 21 16:16:22 crc kubenswrapper[4834]: W0121 16:16:22.486755 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9abaa2_24c9_448b_b82c_15cf02145f6c.slice/crio-2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204 WatchSource:0}: Error finding container 2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204: Status 404 returned error can't find the container with id 2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.489550 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-42f3-account-create-update-tq7hk"]
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.823731 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-42f3-account-create-update-tq7hk" event={"ID":"9c9abaa2-24c9-448b-b82c-15cf02145f6c","Type":"ContainerStarted","Data":"1c62c4a7b44307e473935e5922af7bcf55ecff5044585fba505b331a98116f17"}
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.823813 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-42f3-account-create-update-tq7hk" event={"ID":"9c9abaa2-24c9-448b-b82c-15cf02145f6c","Type":"ContainerStarted","Data":"2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204"}
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.826360 4834 generic.go:334] "Generic (PLEG): container finished" podID="a7423589-b71a-4f9d-967f-ea1591657d19" containerID="d70c6dd8dabcd18df5ca365a15f151a9b1c6f5ff288e5d56877cb395583d1523" exitCode=0
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.826429 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2zsnf" event={"ID":"a7423589-b71a-4f9d-967f-ea1591657d19","Type":"ContainerDied","Data":"d70c6dd8dabcd18df5ca365a15f151a9b1c6f5ff288e5d56877cb395583d1523"}
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.826473 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2zsnf" event={"ID":"a7423589-b71a-4f9d-967f-ea1591657d19","Type":"ContainerStarted","Data":"071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298"}
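[Annotation] The manager.go:1169 warnings above fire in the gap between the crio-... cgroup appearing and the runtime registering the container; the ContainerStarted events that follow show both containers came up normally. A minimal sketch, with a hypothetical containerStatus function standing in for the runtime query, of the treat-404-as-transient pattern these warnings imply:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the "Status 404 ... can't find the
// container" error seen in the warnings above.
var errNotFound = errors.New("can't find the container with id")

// containerStatus is a hypothetical stand-in for querying the runtime.
func containerStatus(id string, registered bool) error {
	if !registered {
		return errNotFound
	}
	return nil
}

func handleWatchEvent(id string, registered bool) {
	// Treat "not found" as transient: a later relist observes the
	// container once the runtime has registered it.
	if err := containerStatus(id, registered); errors.Is(err, errNotFound) {
		fmt.Printf("W Failed to process watch event for %s: %v\n", id, err)
		return
	}
	fmt.Printf("I processed watch event for %s\n", id)
}

func main() {
	handleWatchEvent("071a172bbbde9aad", false) // warning logged, retried later
	handleWatchEvent("071a172bbbde9aad", true)  // succeeds once registered
}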
Jan 21 16:16:22 crc kubenswrapper[4834]: I0121 16:16:22.838716 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-42f3-account-create-update-tq7hk" podStartSLOduration=1.8386920039999999 podStartE2EDuration="1.838692004s" podCreationTimestamp="2026-01-21 16:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:22.836388302 +0000 UTC m=+6328.810737347" watchObservedRunningTime="2026-01-21 16:16:22.838692004 +0000 UTC m=+6328.813041059"
Jan 21 16:16:23 crc kubenswrapper[4834]: I0121 16:16:23.835269 4834 generic.go:334] "Generic (PLEG): container finished" podID="9c9abaa2-24c9-448b-b82c-15cf02145f6c" containerID="1c62c4a7b44307e473935e5922af7bcf55ecff5044585fba505b331a98116f17" exitCode=0
Jan 21 16:16:23 crc kubenswrapper[4834]: I0121 16:16:23.835423 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-42f3-account-create-update-tq7hk" event={"ID":"9c9abaa2-24c9-448b-b82c-15cf02145f6c","Type":"ContainerDied","Data":"1c62c4a7b44307e473935e5922af7bcf55ecff5044585fba505b331a98116f17"}
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.276787 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.474220 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gj2r\" (UniqueName: \"kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r\") pod \"a7423589-b71a-4f9d-967f-ea1591657d19\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") "
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.474389 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts\") pod \"a7423589-b71a-4f9d-967f-ea1591657d19\" (UID: \"a7423589-b71a-4f9d-967f-ea1591657d19\") "
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.475344 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7423589-b71a-4f9d-967f-ea1591657d19" (UID: "a7423589-b71a-4f9d-967f-ea1591657d19"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.475848 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7423589-b71a-4f9d-967f-ea1591657d19-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.480249 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r" (OuterVolumeSpecName: "kube-api-access-4gj2r") pod "a7423589-b71a-4f9d-967f-ea1591657d19" (UID: "a7423589-b71a-4f9d-967f-ea1591657d19"). InnerVolumeSpecName "kube-api-access-4gj2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.577783 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gj2r\" (UniqueName: \"kubernetes.io/projected/a7423589-b71a-4f9d-967f-ea1591657d19-kube-api-access-4gj2r\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.851624 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-2zsnf" event={"ID":"a7423589-b71a-4f9d-967f-ea1591657d19","Type":"ContainerDied","Data":"071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298"}
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.852078 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="071a172bbbde9aad22c634353d02ddcab0fba44e9d4d9e3ea92f5b51bf6d0298"
Jan 21 16:16:24 crc kubenswrapper[4834]: I0121 16:16:24.851643 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-2zsnf"
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.278189 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.394527 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbv5b\" (UniqueName: \"kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b\") pod \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") "
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.394612 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts\") pod \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\" (UID: \"9c9abaa2-24c9-448b-b82c-15cf02145f6c\") "
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.395765 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c9abaa2-24c9-448b-b82c-15cf02145f6c" (UID: "9c9abaa2-24c9-448b-b82c-15cf02145f6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.402138 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b" (OuterVolumeSpecName: "kube-api-access-kbv5b") pod "9c9abaa2-24c9-448b-b82c-15cf02145f6c" (UID: "9c9abaa2-24c9-448b-b82c-15cf02145f6c"). InnerVolumeSpecName "kube-api-access-kbv5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.497132 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbv5b\" (UniqueName: \"kubernetes.io/projected/9c9abaa2-24c9-448b-b82c-15cf02145f6c-kube-api-access-kbv5b\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.497168 4834 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c9abaa2-24c9-448b-b82c-15cf02145f6c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.886581 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-42f3-account-create-update-tq7hk" event={"ID":"9c9abaa2-24c9-448b-b82c-15cf02145f6c","Type":"ContainerDied","Data":"2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204"}
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.886643 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2113c7f617648a20fa4514278957d4cddc82577cf7376329660f47092c01c204"
Jan 21 16:16:25 crc kubenswrapper[4834]: I0121 16:16:25.888041 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-42f3-account-create-update-tq7hk"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.951031 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-hxrb7"]
Jan 21 16:16:26 crc kubenswrapper[4834]: E0121 16:16:26.952167 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9abaa2-24c9-448b-b82c-15cf02145f6c" containerName="mariadb-account-create-update"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.952188 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9abaa2-24c9-448b-b82c-15cf02145f6c" containerName="mariadb-account-create-update"
Jan 21 16:16:26 crc kubenswrapper[4834]: E0121 16:16:26.952225 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7423589-b71a-4f9d-967f-ea1591657d19" containerName="mariadb-database-create"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.952233 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7423589-b71a-4f9d-967f-ea1591657d19" containerName="mariadb-database-create"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.952458 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7423589-b71a-4f9d-967f-ea1591657d19" containerName="mariadb-database-create"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.952478 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9abaa2-24c9-448b-b82c-15cf02145f6c" containerName="mariadb-account-create-update"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.953362 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.955714 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-pshzq"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.955966 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 21 16:16:26 crc kubenswrapper[4834]: I0121 16:16:26.962480 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hxrb7"]
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.132383 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.132473 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59hs\" (UniqueName: \"kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.132519 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.132585 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.234188 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.234311 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59hs\" (UniqueName: \"kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.234370 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.234456 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.245596 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.245999 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.246270 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.251064 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59hs\" (UniqueName: \"kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs\") pod \"manila-db-sync-hxrb7\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") " pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:27 crc kubenswrapper[4834]: I0121 16:16:27.275675 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:28 crc kubenswrapper[4834]: I0121 16:16:28.015722 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hxrb7"]
Jan 21 16:16:28 crc kubenswrapper[4834]: W0121 16:16:28.029105 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1b1064e_c353_4f3a_bacb_739206fcde86.slice/crio-2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23 WatchSource:0}: Error finding container 2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23: Status 404 returned error can't find the container with id 2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23
Jan 21 16:16:28 crc kubenswrapper[4834]: I0121 16:16:28.915712 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hxrb7" event={"ID":"f1b1064e-c353-4f3a-bacb-739206fcde86","Type":"ContainerStarted","Data":"2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23"}
Jan 21 16:16:34 crc kubenswrapper[4834]: I0121 16:16:34.985527 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hxrb7" event={"ID":"f1b1064e-c353-4f3a-bacb-739206fcde86","Type":"ContainerStarted","Data":"7af80ffc4ea331b8df5306b01017645f0cc1c9700125ef7af339afd0f59ba1dd"}
Jan 21 16:16:35 crc kubenswrapper[4834]: I0121 16:16:35.006295 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-hxrb7" podStartSLOduration=2.889066353 podStartE2EDuration="9.006275058s" podCreationTimestamp="2026-01-21 16:16:26 +0000 UTC" firstStartedPulling="2026-01-21 16:16:28.03273602 +0000 UTC m=+6334.007085065" lastFinishedPulling="2026-01-21 16:16:34.149944725 +0000 UTC m=+6340.124293770" observedRunningTime="2026-01-21 16:16:34.999707074 +0000 UTC m=+6340.974056139" watchObservedRunningTime="2026-01-21 16:16:35.006275058 +0000 UTC m=+6340.980624103"
Jan 21 16:16:35 crc kubenswrapper[4834]: I0121 16:16:35.041891 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k5ptb"]
Jan 21 16:16:35 crc kubenswrapper[4834]: I0121 16:16:35.053098 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k5ptb"]
Jan 21 16:16:36 crc kubenswrapper[4834]: I0121 16:16:36.039593 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3044-account-create-update-kvmlb"]
Jan 21 16:16:36 crc kubenswrapper[4834]: I0121 16:16:36.054526 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3044-account-create-update-kvmlb"]
Jan 21 16:16:36 crc kubenswrapper[4834]: I0121 16:16:36.351121 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06969108-96f5-4459-807d-a648e2ccb025" path="/var/lib/kubelet/pods/06969108-96f5-4459-807d-a648e2ccb025/volumes"
Jan 21 16:16:36 crc kubenswrapper[4834]: I0121 16:16:36.352309 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a879ebb7-ee92-4e94-bf4d-17c2549b9c60" path="/var/lib/kubelet/pods/a879ebb7-ee92-4e94-bf4d-17c2549b9c60/volumes"
Jan 21 16:16:37 crc kubenswrapper[4834]: I0121 16:16:37.009676 4834 generic.go:334] "Generic (PLEG): container finished" podID="f1b1064e-c353-4f3a-bacb-739206fcde86" containerID="7af80ffc4ea331b8df5306b01017645f0cc1c9700125ef7af339afd0f59ba1dd" exitCode=0
Jan 21 16:16:37 crc kubenswrapper[4834]: I0121 16:16:37.009731 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hxrb7" event={"ID":"f1b1064e-c353-4f3a-bacb-739206fcde86","Type":"ContainerDied","Data":"7af80ffc4ea331b8df5306b01017645f0cc1c9700125ef7af339afd0f59ba1dd"}
Jan 21 16:16:38 crc kubenswrapper[4834]: I0121 16:16:38.979178 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.031424 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hxrb7" event={"ID":"f1b1064e-c353-4f3a-bacb-739206fcde86","Type":"ContainerDied","Data":"2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23"}
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.031469 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2706e7bb6069410ef0722d04e5fe16caef8b2b727f50fc02e2e1c5206ea6ac23"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.031523 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hxrb7"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.127180 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data\") pod \"f1b1064e-c353-4f3a-bacb-739206fcde86\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") "
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.127562 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle\") pod \"f1b1064e-c353-4f3a-bacb-739206fcde86\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") "
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.127597 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data\") pod \"f1b1064e-c353-4f3a-bacb-739206fcde86\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") "
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.127709 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j59hs\" (UniqueName: \"kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs\") pod \"f1b1064e-c353-4f3a-bacb-739206fcde86\" (UID: \"f1b1064e-c353-4f3a-bacb-739206fcde86\") "
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.133368 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs" (OuterVolumeSpecName: "kube-api-access-j59hs") pod "f1b1064e-c353-4f3a-bacb-739206fcde86" (UID: "f1b1064e-c353-4f3a-bacb-739206fcde86"). InnerVolumeSpecName "kube-api-access-j59hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.135638 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data" (OuterVolumeSpecName: "config-data") pod "f1b1064e-c353-4f3a-bacb-739206fcde86" (UID: "f1b1064e-c353-4f3a-bacb-739206fcde86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.136423 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f1b1064e-c353-4f3a-bacb-739206fcde86" (UID: "f1b1064e-c353-4f3a-bacb-739206fcde86"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.164824 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1b1064e-c353-4f3a-bacb-739206fcde86" (UID: "f1b1064e-c353-4f3a-bacb-739206fcde86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.231558 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j59hs\" (UniqueName: \"kubernetes.io/projected/f1b1064e-c353-4f3a-bacb-739206fcde86-kube-api-access-j59hs\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.231598 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.231612 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.231624 4834 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f1b1064e-c353-4f3a-bacb-739206fcde86-job-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.373432 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: E0121 16:16:39.373995 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b1064e-c353-4f3a-bacb-739206fcde86" containerName="manila-db-sync"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.374014 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b1064e-c353-4f3a-bacb-739206fcde86" containerName="manila-db-sync"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.374261 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b1064e-c353-4f3a-bacb-739206fcde86" containerName="manila-db-sync"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.375438 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.379544 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.380024 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-pshzq"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.380175 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.380205 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.391823 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.393654 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.400423 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.437907 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.523375 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548563 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548616 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584xz\" (UniqueName: \"kubernetes.io/projected/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-kube-api-access-584xz\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548658 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548678 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548710 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-scripts\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548743 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-scripts\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548756 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548776 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ln5\" (UniqueName: \"kubernetes.io/projected/629902f5-4954-484d-ba2d-a1c356bd7c68-kube-api-access-v6ln5\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548802 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548829 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548855 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-ceph\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548882 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629902f5-4954-484d-ba2d-a1c356bd7c68-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.548908 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.549002 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.586361 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.588320 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.600485 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651369 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-scripts\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651464 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-scripts\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651492 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651532 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6ln5\" (UniqueName: \"kubernetes.io/projected/629902f5-4954-484d-ba2d-a1c356bd7c68-kube-api-access-v6ln5\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651585 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651638 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651670 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-ceph\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651723 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629902f5-4954-484d-ba2d-a1c356bd7c68-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651781 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651902 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.651997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.652044 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584xz\" (UniqueName: \"kubernetes.io/projected/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-kube-api-access-584xz\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.652128 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.652163 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.652745 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.656449 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-scripts\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.659350 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.659774 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-scripts\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.662775 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/629902f5-4954-484d-ba2d-a1c356bd7c68-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.663475 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.664018 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.667001 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.672690 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-ceph\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.673370 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.673530 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.673949 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.677223 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.679189 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.679777 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629902f5-4954-484d-ba2d-a1c356bd7c68-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0"
Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.682062 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0747c6b-3d55-4fd7-afa8-2bdac4a772c4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0"
\"manila-share-share1-0\" (UID: \"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4\") " pod="openstack/manila-share-share1-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.697618 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6ln5\" (UniqueName: \"kubernetes.io/projected/629902f5-4954-484d-ba2d-a1c356bd7c68-kube-api-access-v6ln5\") pod \"manila-scheduler-0\" (UID: \"629902f5-4954-484d-ba2d-a1c356bd7c68\") " pod="openstack/manila-scheduler-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.738073 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755011 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44fe3c57-e369-4ae0-ab19-2dcbb1179714-etc-machine-id\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755298 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755380 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755424 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data-custom\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755482 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755522 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fe3c57-e369-4ae0-ab19-2dcbb1179714-logs\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755628 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755712 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-scripts\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755729 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmxp\" (UniqueName: \"kubernetes.io/projected/44fe3c57-e369-4ae0-ab19-2dcbb1179714-kube-api-access-9zmxp\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755779 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755810 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.755880 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltx5s\" (UniqueName: \"kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.806580 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.858052 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44fe3c57-e369-4ae0-ab19-2dcbb1179714-etc-machine-id\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.858375 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44fe3c57-e369-4ae0-ab19-2dcbb1179714-etc-machine-id\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.858563 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.858591 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.859413 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.859462 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data-custom\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.859500 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860109 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860108 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44fe3c57-e369-4ae0-ab19-2dcbb1179714-logs\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860170 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/44fe3c57-e369-4ae0-ab19-2dcbb1179714-logs\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860220 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860276 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-scripts\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860294 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmxp\" (UniqueName: \"kubernetes.io/projected/44fe3c57-e369-4ae0-ab19-2dcbb1179714-kube-api-access-9zmxp\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860339 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860360 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.860439 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltx5s\" (UniqueName: \"kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.861730 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.863980 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.864863 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 
16:16:39.866412 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-scripts\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.868573 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data-custom\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.879208 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44fe3c57-e369-4ae0-ab19-2dcbb1179714-config-data\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.891715 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltx5s\" (UniqueName: \"kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s\") pod \"dnsmasq-dns-787c4d466c-fgvgr\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") " pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.893354 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmxp\" (UniqueName: \"kubernetes.io/projected/44fe3c57-e369-4ae0-ab19-2dcbb1179714-kube-api-access-9zmxp\") pod \"manila-api-0\" (UID: \"44fe3c57-e369-4ae0-ab19-2dcbb1179714\") " pod="openstack/manila-api-0" Jan 21 16:16:39 crc kubenswrapper[4834]: I0121 16:16:39.920210 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:40 crc kubenswrapper[4834]: I0121 16:16:40.059503 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 21 16:16:40 crc kubenswrapper[4834]: W0121 16:16:40.357491 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod629902f5_4954_484d_ba2d_a1c356bd7c68.slice/crio-33460585482e66ea4ecbbd88f3689e95474b5d1533043732edc6238272589169 WatchSource:0}: Error finding container 33460585482e66ea4ecbbd88f3689e95474b5d1533043732edc6238272589169: Status 404 returned error can't find the container with id 33460585482e66ea4ecbbd88f3689e95474b5d1533043732edc6238272589169 Jan 21 16:16:40 crc kubenswrapper[4834]: I0121 16:16:40.362825 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 21 16:16:40 crc kubenswrapper[4834]: I0121 16:16:40.651217 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 21 16:16:40 crc kubenswrapper[4834]: I0121 16:16:40.702513 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"] Jan 21 16:16:40 crc kubenswrapper[4834]: I0121 16:16:40.933159 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 21 16:16:41 crc kubenswrapper[4834]: I0121 16:16:41.066305 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4","Type":"ContainerStarted","Data":"70adeb4e0dcfdefe1b3aa38b95836f1a2e4225d03ee222eeea88561b96fe97d5"} Jan 21 16:16:41 crc kubenswrapper[4834]: I0121 16:16:41.083904 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"44fe3c57-e369-4ae0-ab19-2dcbb1179714","Type":"ContainerStarted","Data":"edad041ec4edb0e5fda331c21fcd7c4ef9ec018612b477ee400ff8a1b873dabd"} Jan 21 16:16:41 crc kubenswrapper[4834]: I0121 16:16:41.086968 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerStarted","Data":"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"} Jan 21 16:16:41 crc kubenswrapper[4834]: I0121 16:16:41.087015 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerStarted","Data":"da2fdd868f49eec39ee8dd01a8b65363c0e99d78d789a03a26aa2c330cc11424"} Jan 21 16:16:41 crc kubenswrapper[4834]: I0121 16:16:41.090757 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"629902f5-4954-484d-ba2d-a1c356bd7c68","Type":"ContainerStarted","Data":"33460585482e66ea4ecbbd88f3689e95474b5d1533043732edc6238272589169"} Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.110360 4834 generic.go:334] "Generic (PLEG): container finished" podID="00c04a6c-6896-452c-9864-1d7bcc774786" containerID="6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde" exitCode=0 Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.110476 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerDied","Data":"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"} Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.110909 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" 
event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerStarted","Data":"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"} Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.110943 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.113678 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"629902f5-4954-484d-ba2d-a1c356bd7c68","Type":"ContainerStarted","Data":"081fa7f3d2a22e8aa806a914c16fdba5550232a5ff62ee59ba72dcdc42eff360"} Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.115845 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"44fe3c57-e369-4ae0-ab19-2dcbb1179714","Type":"ContainerStarted","Data":"77cf5a0d567618159cee849d4b5c6a89ca160cb2e7ae988b89c404caafb0822e"} Jan 21 16:16:42 crc kubenswrapper[4834]: I0121 16:16:42.143104 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" podStartSLOduration=3.143081141 podStartE2EDuration="3.143081141s" podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:42.133239994 +0000 UTC m=+6348.107589049" watchObservedRunningTime="2026-01-21 16:16:42.143081141 +0000 UTC m=+6348.117430186" Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.063915 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dds77"] Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.080316 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dds77"] Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.134793 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.149177 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"44fe3c57-e369-4ae0-ab19-2dcbb1179714","Type":"ContainerStarted","Data":"0617e9a562041f215a4b1696feaec933832bcb206b2a710555e0af3b490f3b24"} Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.149259 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.173486 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"629902f5-4954-484d-ba2d-a1c356bd7c68","Type":"ContainerStarted","Data":"05cb60706775e8efd8c6fd2a4439c3131d1ba339a64b4805e4beef13f9e2244e"} Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.207568 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.207542279 podStartE2EDuration="4.207542279s" podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:16:43.206780105 +0000 UTC m=+6349.181129170" watchObservedRunningTime="2026-01-21 16:16:43.207542279 +0000 UTC m=+6349.181891324" Jan 21 16:16:43 crc kubenswrapper[4834]: I0121 16:16:43.250885 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.599858546 podStartE2EDuration="4.250855791s" 
podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="2026-01-21 16:16:40.360290557 +0000 UTC m=+6346.334639602" lastFinishedPulling="2026-01-21 16:16:41.011287802 +0000 UTC m=+6346.985636847" observedRunningTime="2026-01-21 16:16:43.22744868 +0000 UTC m=+6349.201797735" watchObservedRunningTime="2026-01-21 16:16:43.250855791 +0000 UTC m=+6349.225204836" Jan 21 16:16:44 crc kubenswrapper[4834]: I0121 16:16:44.343378 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13e195d7-9f94-44cd-8b1f-74631ce95c58" path="/var/lib/kubelet/pods/13e195d7-9f94-44cd-8b1f-74631ce95c58/volumes" Jan 21 16:16:47 crc kubenswrapper[4834]: I0121 16:16:47.113977 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:16:47 crc kubenswrapper[4834]: I0121 16:16:47.114497 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:16:49 crc kubenswrapper[4834]: I0121 16:16:49.739177 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 21 16:16:49 crc kubenswrapper[4834]: I0121 16:16:49.921788 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" Jan 21 16:16:49 crc kubenswrapper[4834]: I0121 16:16:49.988631 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:16:49 crc kubenswrapper[4834]: I0121 16:16:49.989123 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="dnsmasq-dns" containerID="cri-o://32a695321f1dd740a129b48beb244bd26e742e68ef38ce8642e3ab77acad2c6f" gracePeriod=10 Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.269040 4834 generic.go:334] "Generic (PLEG): container finished" podID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerID="32a695321f1dd740a129b48beb244bd26e742e68ef38ce8642e3ab77acad2c6f" exitCode=0 Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.269205 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" event={"ID":"da2ce2de-e312-4435-98e7-bd7e410a2f15","Type":"ContainerDied","Data":"32a695321f1dd740a129b48beb244bd26e742e68ef38ce8642e3ab77acad2c6f"} Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.282190 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4","Type":"ContainerStarted","Data":"53cfbc00d38c2bfa4486d074210a9a11c50605c3f326ef3fd62898522dfd87f3"} Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.282242 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f0747c6b-3d55-4fd7-afa8-2bdac4a772c4","Type":"ContainerStarted","Data":"731d2ab3a42bf88f0ea5235d6664bb1b6759299105030507ae61d08742077d05"} Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.358192 4834 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.229054117 podStartE2EDuration="11.358169542s" podCreationTimestamp="2026-01-21 16:16:39 +0000 UTC" firstStartedPulling="2026-01-21 16:16:40.665031477 +0000 UTC m=+6346.639380522" lastFinishedPulling="2026-01-21 16:16:48.794146902 +0000 UTC m=+6354.768495947" observedRunningTime="2026-01-21 16:16:50.343587225 +0000 UTC m=+6356.317936260" watchObservedRunningTime="2026-01-21 16:16:50.358169542 +0000 UTC m=+6356.332518597" Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.669077 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.790970 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config\") pod \"da2ce2de-e312-4435-98e7-bd7e410a2f15\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.791125 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jz9g\" (UniqueName: \"kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g\") pod \"da2ce2de-e312-4435-98e7-bd7e410a2f15\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.791190 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc\") pod \"da2ce2de-e312-4435-98e7-bd7e410a2f15\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.791321 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb\") pod \"da2ce2de-e312-4435-98e7-bd7e410a2f15\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.791418 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb\") pod \"da2ce2de-e312-4435-98e7-bd7e410a2f15\" (UID: \"da2ce2de-e312-4435-98e7-bd7e410a2f15\") " Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.832584 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g" (OuterVolumeSpecName: "kube-api-access-7jz9g") pod "da2ce2de-e312-4435-98e7-bd7e410a2f15" (UID: "da2ce2de-e312-4435-98e7-bd7e410a2f15"). InnerVolumeSpecName "kube-api-access-7jz9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.895846 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jz9g\" (UniqueName: \"kubernetes.io/projected/da2ce2de-e312-4435-98e7-bd7e410a2f15-kube-api-access-7jz9g\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.996016 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da2ce2de-e312-4435-98e7-bd7e410a2f15" (UID: "da2ce2de-e312-4435-98e7-bd7e410a2f15"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:50 crc kubenswrapper[4834]: I0121 16:16:50.997705 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.005121 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da2ce2de-e312-4435-98e7-bd7e410a2f15" (UID: "da2ce2de-e312-4435-98e7-bd7e410a2f15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.019886 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da2ce2de-e312-4435-98e7-bd7e410a2f15" (UID: "da2ce2de-e312-4435-98e7-bd7e410a2f15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.023695 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config" (OuterVolumeSpecName: "config") pod "da2ce2de-e312-4435-98e7-bd7e410a2f15" (UID: "da2ce2de-e312-4435-98e7-bd7e410a2f15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.099688 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.099729 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.099737 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da2ce2de-e312-4435-98e7-bd7e410a2f15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.295595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" event={"ID":"da2ce2de-e312-4435-98e7-bd7e410a2f15","Type":"ContainerDied","Data":"53292b75d7d7b8244fe564f178f5f005bc295c9d16f8c62345a4fa4293f1ed01"} Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.295656 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d5c47fd59-dkcb5" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.295671 4834 scope.go:117] "RemoveContainer" containerID="32a695321f1dd740a129b48beb244bd26e742e68ef38ce8642e3ab77acad2c6f" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.327869 4834 scope.go:117] "RemoveContainer" containerID="1ae68fa6eab0fbd7c07431c09211da10725bf9b2a41a58b87f7cbd0841048b2c" Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.341090 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:16:51 crc kubenswrapper[4834]: I0121 16:16:51.351361 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d5c47fd59-dkcb5"] Jan 21 16:16:52 crc kubenswrapper[4834]: I0121 16:16:52.340625 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" path="/var/lib/kubelet/pods/da2ce2de-e312-4435-98e7-bd7e410a2f15/volumes" Jan 21 16:16:53 crc kubenswrapper[4834]: I0121 16:16:53.366187 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:16:53 crc kubenswrapper[4834]: I0121 16:16:53.367292 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-central-agent" containerID="cri-o://3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5" gracePeriod=30 Jan 21 16:16:53 crc kubenswrapper[4834]: I0121 16:16:53.367800 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="proxy-httpd" containerID="cri-o://7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72" gracePeriod=30 Jan 21 16:16:53 crc kubenswrapper[4834]: I0121 16:16:53.367991 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-notification-agent" containerID="cri-o://65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023" gracePeriod=30 Jan 21 16:16:53 crc kubenswrapper[4834]: I0121 16:16:53.368037 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="sg-core" containerID="cri-o://53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419" gracePeriod=30 Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.326736 4834 generic.go:334] "Generic (PLEG): container finished" podID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerID="7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72" exitCode=0 Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.327000 4834 generic.go:334] "Generic (PLEG): container finished" podID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerID="53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419" exitCode=2 Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.327008 4834 generic.go:334] "Generic (PLEG): container finished" podID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerID="3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5" exitCode=0 Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.391456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerDied","Data":"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72"} Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.391507 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerDied","Data":"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419"} Jan 21 16:16:54 crc kubenswrapper[4834]: I0121 16:16:54.391521 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerDied","Data":"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5"} Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.068518 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a438e3_1151_44b6_aac3_a3bc30878c99.slice/crio-conmon-65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.284863 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.386667 4834 generic.go:334] "Generic (PLEG): container finished" podID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerID="65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023" exitCode=0 Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.386724 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerDied","Data":"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023"} Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.386761 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28a438e3-1151-44b6-aac3-a3bc30878c99","Type":"ContainerDied","Data":"d6587ec77c2eae5def97513d52e8ae201a23889d90335f9616a9590cc4f3a6b0"} Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.386786 4834 scope.go:117] "RemoveContainer" containerID="7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.386983 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.411563 4834 scope.go:117] "RemoveContainer" containerID="53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.413735 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.413875 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.413977 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.413999 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7m8r\" (UniqueName: \"kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.414042 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.414120 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.414149 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd\") pod \"28a438e3-1151-44b6-aac3-a3bc30878c99\" (UID: \"28a438e3-1151-44b6-aac3-a3bc30878c99\") " Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.414461 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.414770 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.415286 4834 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.415317 4834 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a438e3-1151-44b6-aac3-a3bc30878c99-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.427501 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts" (OuterVolumeSpecName: "scripts") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.427579 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r" (OuterVolumeSpecName: "kube-api-access-s7m8r") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "kube-api-access-s7m8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.440890 4834 scope.go:117] "RemoveContainer" containerID="65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.447455 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.504319 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.518065 4834 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.518103 4834 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.518117 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7m8r\" (UniqueName: \"kubernetes.io/projected/28a438e3-1151-44b6-aac3-a3bc30878c99-kube-api-access-s7m8r\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.518127 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.533168 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data" (OuterVolumeSpecName: "config-data") pod "28a438e3-1151-44b6-aac3-a3bc30878c99" (UID: "28a438e3-1151-44b6-aac3-a3bc30878c99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.566731 4834 scope.go:117] "RemoveContainer" containerID="3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.588440 4834 scope.go:117] "RemoveContainer" containerID="7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72" Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.588872 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72\": container with ID starting with 7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72 not found: ID does not exist" containerID="7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.588912 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72"} err="failed to get container status \"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72\": rpc error: code = NotFound desc = could not find container \"7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72\": container with ID starting with 7dfe63dc31cb08f24bd2d3fa6a73d3239c601df575224a4bb99d88e509130e72 not found: ID does not exist" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.588954 4834 scope.go:117] "RemoveContainer" containerID="53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419" Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.589557 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419\": container with ID starting with 53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419 not found: ID does not exist" 
containerID="53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.589607 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419"} err="failed to get container status \"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419\": rpc error: code = NotFound desc = could not find container \"53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419\": container with ID starting with 53f85e8c8d879538d33c75af99c3f58214bcacf8dd54ed28ba2bda71d0293419 not found: ID does not exist" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.589644 4834 scope.go:117] "RemoveContainer" containerID="65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023" Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.590080 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023\": container with ID starting with 65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023 not found: ID does not exist" containerID="65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.590123 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023"} err="failed to get container status \"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023\": rpc error: code = NotFound desc = could not find container \"65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023\": container with ID starting with 65f3ec4ca1745d145ce4903086cdb985bc25b554c96b34b588ab72673270c023 not found: ID does not exist" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.590148 4834 scope.go:117] "RemoveContainer" containerID="3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5" Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.590514 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5\": container with ID starting with 3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5 not found: ID does not exist" containerID="3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.590587 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5"} err="failed to get container status \"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5\": rpc error: code = NotFound desc = could not find container \"3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5\": container with ID starting with 3a4cfe34285cf5ca3c378dd6cb45191f0826e6546fb86404679d86d4669900f5 not found: ID does not exist" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.620277 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a438e3-1151-44b6-aac3-a3bc30878c99-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.721624 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.742655 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758289 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758839 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="sg-core"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758865 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="sg-core"
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758878 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="init"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758886 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="init"
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758910 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="dnsmasq-dns"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758916 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="dnsmasq-dns"
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758947 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="proxy-httpd"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758954 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="proxy-httpd"
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758972 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-central-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.758978 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-central-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: E0121 16:16:59.758997 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-notification-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759003 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-notification-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759215 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="proxy-httpd"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759226 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2ce2de-e312-4435-98e7-bd7e410a2f15" containerName="dnsmasq-dns"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759233 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="sg-core"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759251 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-central-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.759262 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" containerName="ceilometer-notification-agent"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.761593 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.768250 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.768312 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.780048 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.810415 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.927186 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-config-data\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.927308 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94cv\" (UniqueName: \"kubernetes.io/projected/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-kube-api-access-q94cv\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.927360 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-log-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.927403 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.927995 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.928267 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-scripts\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:16:59 crc kubenswrapper[4834]: I0121 16:16:59.928583 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-run-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031071 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-scripts\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031181 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-run-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031246 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-config-data\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031278 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q94cv\" (UniqueName: \"kubernetes.io/projected/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-kube-api-access-q94cv\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031310 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-log-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031345 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031417 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.031914 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-log-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.032112 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-run-httpd\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.035920 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.036819 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-scripts\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.041094 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-config-data\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.047802 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.063952 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94cv\" (UniqueName: \"kubernetes.io/projected/9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee-kube-api-access-q94cv\") pod \"ceilometer-0\" (UID: \"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee\") " pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.084497 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.345476 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a438e3-1151-44b6-aac3-a3bc30878c99" path="/var/lib/kubelet/pods/28a438e3-1151-44b6-aac3-a3bc30878c99/volumes"
Jan 21 16:17:00 crc kubenswrapper[4834]: I0121 16:17:00.625938 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:17:01 crc kubenswrapper[4834]: I0121 16:17:01.461944 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee","Type":"ContainerStarted","Data":"bb2913f0ee6df33720709ae4dba6517faa3b8749ccd290d52362748d413425f6"}
Jan 21 16:17:01 crc kubenswrapper[4834]: I0121 16:17:01.552908 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Jan 21 16:17:01 crc kubenswrapper[4834]: I0121 16:17:01.949661 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Jan 21 16:17:02 crc kubenswrapper[4834]: I0121 16:17:02.082237 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Jan 21 16:17:02 crc kubenswrapper[4834]: I0121 16:17:02.475304 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee","Type":"ContainerStarted","Data":"a2448cc287ded6431f39330964432ab5fa04afa1e17609f3e3fb5e598c3d2a0a"}
Jan 21 16:17:03 crc kubenswrapper[4834]: I0121 16:17:03.488094 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee","Type":"ContainerStarted","Data":"2a9e970756d02785a37e3060e05512ab5f01e29ba48c76978f1360bcc5064469"}
Jan 21 16:17:03 crc kubenswrapper[4834]: I0121 16:17:03.488447 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee","Type":"ContainerStarted","Data":"e7711538490c0f6fc610ad8bae861b3cf0a5277fd22156536a855e2287173c6d"}
Jan 21 16:17:05 crc kubenswrapper[4834]: I0121 16:17:05.510672 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee","Type":"ContainerStarted","Data":"a569fc1a83923646dde61eaa84e1dac8e7a57f9c9a47dd9b7872f4e1134c3bd8"}
Jan 21 16:17:05 crc kubenswrapper[4834]: I0121 16:17:05.511252 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 16:17:05 crc kubenswrapper[4834]: I0121 16:17:05.534680 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.674612134 podStartE2EDuration="6.534658346s" podCreationTimestamp="2026-01-21 16:16:59 +0000 UTC" firstStartedPulling="2026-01-21 16:17:00.648128471 +0000 UTC m=+6366.622477516" lastFinishedPulling="2026-01-21 16:17:04.508174633 +0000 UTC m=+6370.482523728" observedRunningTime="2026-01-21 16:17:05.531453365 +0000 UTC m=+6371.505802430" watchObservedRunningTime="2026-01-21 16:17:05.534658346 +0000 UTC m=+6371.509007391"
Jan 21 16:17:11 crc kubenswrapper[4834]: I0121 16:17:11.133185 4834 scope.go:117] "RemoveContainer" containerID="30ac3c92c3e7d84072de5cf78d8266e904971da111ed6fdd8f0d37efcd052300"
Jan 21 16:17:11 crc kubenswrapper[4834]: I0121 16:17:11.172586 4834 scope.go:117] "RemoveContainer" containerID="e7d1ef14bd51a88a076f84c81e502a139d6d1591aa60ec72f6c1f2d16bd898a1"
Jan 21 16:17:11 crc kubenswrapper[4834]: I0121 16:17:11.218164 4834 scope.go:117] "RemoveContainer" containerID="9e7f8d0fb02a18b039ccdce3c4e4ff8757d163efacb7a7a3f721330a94c92a4b"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.114621 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.115157 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.115201 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.116099 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.116160 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" gracePeriod=600
Jan 21 16:17:17 crc kubenswrapper[4834]: E0121 16:17:17.234688 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.633780 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" exitCode=0
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.633864 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"}
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.634224 4834 scope.go:117] "RemoveContainer" containerID="32379ce27e78b0554ab6a50c272e2419fe05a1e3480f8360ebd5c3ae33b4df8b"
Jan 21 16:17:17 crc kubenswrapper[4834]: I0121 16:17:17.634591 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"
Jan 21 16:17:17 crc kubenswrapper[4834]: E0121 16:17:17.634849 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:17:30 crc kubenswrapper[4834]: I0121 16:17:30.090702 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 16:17:31 crc kubenswrapper[4834]: I0121 16:17:31.325239 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"
Jan 21 16:17:31 crc kubenswrapper[4834]: E0121 16:17:31.326416 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:17:42 crc kubenswrapper[4834]: I0121 16:17:42.325112 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"
Jan 21 16:17:42 crc kubenswrapper[4834]: E0121 16:17:42.326570 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.725585 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"]
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.728093 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.732414 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.742267 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"]
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.842446 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.842911 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.842992 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckg8\" (UniqueName: \"kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.843017 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.843082 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.843209 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945351 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945467 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945523 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckg8\" (UniqueName: \"kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945542 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945577 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.945665 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.946815 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.946869 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.946967 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.946999 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.947096 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:45 crc kubenswrapper[4834]: I0121 16:17:45.970889 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckg8\" (UniqueName: \"kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8\") pod \"dnsmasq-dns-8cb58cc59-w2lbj\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") " pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:46 crc kubenswrapper[4834]: I0121 16:17:46.049498 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:46 crc kubenswrapper[4834]: I0121 16:17:46.571739 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"]
Jan 21 16:17:46 crc kubenswrapper[4834]: I0121 16:17:46.968773 4834 generic.go:334] "Generic (PLEG): container finished" podID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerID="b02046bcef660ea31356c63967e5b87bafa773f731a87cd9ad8af00cc64a5d33" exitCode=0
Jan 21 16:17:46 crc kubenswrapper[4834]: I0121 16:17:46.968824 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" event={"ID":"fdcd27ee-d137-4208-a91c-d4184d110a64","Type":"ContainerDied","Data":"b02046bcef660ea31356c63967e5b87bafa773f731a87cd9ad8af00cc64a5d33"}
Jan 21 16:17:46 crc kubenswrapper[4834]: I0121 16:17:46.968855 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" event={"ID":"fdcd27ee-d137-4208-a91c-d4184d110a64","Type":"ContainerStarted","Data":"a490921a8a15363b25926cc3d5323335ff300b1bde95bf25f128258d14aba378"}
Jan 21 16:17:47 crc kubenswrapper[4834]: I0121 16:17:47.991334 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" event={"ID":"fdcd27ee-d137-4208-a91c-d4184d110a64","Type":"ContainerStarted","Data":"2e2f8041717b470b63fa020376ef395240838f243dd21f6b0b2b10ae8c996cf7"}
Jan 21 16:17:47 crc kubenswrapper[4834]: I0121 16:17:47.991665 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:48 crc kubenswrapper[4834]: I0121 16:17:48.021171 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" podStartSLOduration=3.021141503 podStartE2EDuration="3.021141503s" podCreationTimestamp="2026-01-21 16:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:48.014737392 +0000 UTC m=+6413.989086447" watchObservedRunningTime="2026-01-21 16:17:48.021141503 +0000 UTC m=+6413.995490548"
Jan 21 16:17:55 crc kubenswrapper[4834]: I0121 16:17:55.324716 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"
Jan 21 16:17:55 crc kubenswrapper[4834]: E0121 16:17:55.325540 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.051120 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.128638 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"]
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.128975 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="dnsmasq-dns" containerID="cri-o://a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7" gracePeriod=10
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.313917 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68679f7d8c-khwr5"]
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.316602 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.354319 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68679f7d8c-khwr5"]
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.392592 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgkb\" (UniqueName: \"kubernetes.io/projected/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-kube-api-access-jkgkb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.392650 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-nb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.392751 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-config\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.392779 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-openstack-cell1\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.392809 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-sb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.393121 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-dns-svc\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495579 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-config\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495625 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-openstack-cell1\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495656 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-sb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495727 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-dns-svc\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495817 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkgkb\" (UniqueName: \"kubernetes.io/projected/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-kube-api-access-jkgkb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.495837 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-nb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.496943 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-config\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.497242 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-nb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.497487 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-dns-svc\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.499263 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-openstack-cell1\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.499286 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-ovsdbserver-sb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.529204 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkgkb\" (UniqueName: \"kubernetes.io/projected/5f1d6102-14fe-4d09-ad69-41f0f3405fdc-kube-api-access-jkgkb\") pod \"dnsmasq-dns-68679f7d8c-khwr5\" (UID: \"5f1d6102-14fe-4d09-ad69-41f0f3405fdc\") " pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.640453 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.795722 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr"
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.801299 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltx5s\" (UniqueName: \"kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.801396 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.801422 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.801541 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.801586 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.808408 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s" (OuterVolumeSpecName: "kube-api-access-ltx5s") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "kube-api-access-ltx5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.869956 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.888881 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config" (OuterVolumeSpecName: "config") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.899079 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903025 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903283 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") pod \"00c04a6c-6896-452c-9864-1d7bcc774786\" (UID: \"00c04a6c-6896-452c-9864-1d7bcc774786\") "
Jan 21 16:17:56 crc kubenswrapper[4834]: W0121 16:17:56.903392 4834 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/00c04a6c-6896-452c-9864-1d7bcc774786/volumes/kubernetes.io~configmap/ovsdbserver-nb
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903417 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00c04a6c-6896-452c-9864-1d7bcc774786" (UID: "00c04a6c-6896-452c-9864-1d7bcc774786"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903812 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltx5s\" (UniqueName: \"kubernetes.io/projected/00c04a6c-6896-452c-9864-1d7bcc774786-kube-api-access-ltx5s\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903833 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903843 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903852 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:56 crc kubenswrapper[4834]: I0121 16:17:56.903860 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00c04a6c-6896-452c-9864-1d7bcc774786-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.069950 4834 generic.go:334] "Generic (PLEG): container finished" podID="00c04a6c-6896-452c-9864-1d7bcc774786" containerID="a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7" exitCode=0
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.070004 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerDied","Data":"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"}
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.070040 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr" event={"ID":"00c04a6c-6896-452c-9864-1d7bcc774786","Type":"ContainerDied","Data":"da2fdd868f49eec39ee8dd01a8b65363c0e99d78d789a03a26aa2c330cc11424"}
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.070064 4834 scope.go:117] "RemoveContainer" containerID="a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.070227 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-787c4d466c-fgvgr"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.098084 4834 scope.go:117] "RemoveContainer" containerID="6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.111267 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"]
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.122713 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-787c4d466c-fgvgr"]
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.144634 4834 scope.go:117] "RemoveContainer" containerID="a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"
Jan 21 16:17:57 crc kubenswrapper[4834]: E0121 16:17:57.145411 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7\": container with ID starting with a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7 not found: ID does not exist" containerID="a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.145464 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7"} err="failed to get container status \"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7\": rpc error: code = NotFound desc = could not find container \"a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7\": container with ID starting with a575a7a773e53bc03b1f638ba2f5eb5bdba046f56c524bfc41c23265aa5154c7 not found: ID does not exist"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.145492 4834 scope.go:117] "RemoveContainer" containerID="6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"
Jan 21 16:17:57 crc kubenswrapper[4834]: E0121 16:17:57.146102 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde\": container with ID starting with 6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde not found: ID does not exist" containerID="6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.146196 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde"} err="failed to get container status \"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde\": rpc error: code = NotFound desc = could not find container \"6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde\": container with ID starting with 6913ebe99f9b4cb6f606c7709aeae29fe6385b821ae4723eab14df05ed60bfde not found: ID does not exist"
Jan 21 16:17:57 crc kubenswrapper[4834]: I0121 16:17:57.156285 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68679f7d8c-khwr5"]
Jan 21 16:17:58 crc kubenswrapper[4834]: I0121 16:17:58.081135 4834 generic.go:334] "Generic (PLEG): container finished" podID="5f1d6102-14fe-4d09-ad69-41f0f3405fdc" containerID="4a47c6255c1d3a3fdcbc8c692d1d70f7887587d243bc9515f45e141f67a7d873" exitCode=0
Jan 21 16:17:58 crc kubenswrapper[4834]: I0121 16:17:58.081241 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5" event={"ID":"5f1d6102-14fe-4d09-ad69-41f0f3405fdc","Type":"ContainerDied","Data":"4a47c6255c1d3a3fdcbc8c692d1d70f7887587d243bc9515f45e141f67a7d873"}
Jan 21 16:17:58 crc kubenswrapper[4834]: I0121 16:17:58.082542 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5" event={"ID":"5f1d6102-14fe-4d09-ad69-41f0f3405fdc","Type":"ContainerStarted","Data":"8644e55844b1a132621f0d07029da074e95570ec0cc18093a3c926bea7bfb377"}
Jan 21 16:17:58 crc kubenswrapper[4834]: I0121 16:17:58.339848 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" path="/var/lib/kubelet/pods/00c04a6c-6896-452c-9864-1d7bcc774786/volumes"
Jan 21 16:17:59 crc kubenswrapper[4834]: I0121 16:17:59.096096 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5" event={"ID":"5f1d6102-14fe-4d09-ad69-41f0f3405fdc","Type":"ContainerStarted","Data":"f4e3291eacb5bacca83e863b1184993ab462359adb292ed9ae029a224b78481b"}
Jan 21 16:17:59 crc kubenswrapper[4834]: I0121 16:17:59.096269 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:17:59 crc kubenswrapper[4834]: I0121 16:17:59.128213 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5" podStartSLOduration=3.128190459 podStartE2EDuration="3.128190459s" podCreationTimestamp="2026-01-21 16:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:17:59.119444555 +0000 UTC m=+6425.093793610" watchObservedRunningTime="2026-01-21 16:17:59.128190459 +0000 UTC m=+6425.102539504"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.323693 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"]
Jan 21 16:18:02 crc kubenswrapper[4834]: E0121 16:18:02.324869 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="dnsmasq-dns"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.324889 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="dnsmasq-dns"
Jan 21 16:18:02 crc kubenswrapper[4834]: E0121 16:18:02.324961 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="init"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.324969 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="init"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.325217 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="00c04a6c-6896-452c-9864-1d7bcc774786" containerName="dnsmasq-dns"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.326378 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.329849 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.329883 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.331307 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.331578 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.342109 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"]
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.385630 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.386097 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.386433 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.386851 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcknb\" (UniqueName: \"kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.387249 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.488978 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.489078 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.489150 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.489216 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.489321 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcknb\" (UniqueName: \"kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.495348 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.495417 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.496488 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.496641 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.506294 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcknb\" (UniqueName: \"kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:02 crc kubenswrapper[4834]: I0121 16:18:02.645246 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"
Jan 21 16:18:03 crc kubenswrapper[4834]: I0121 16:18:03.221396 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8"]
Jan 21 16:18:03 crc kubenswrapper[4834]: I0121 16:18:03.224387 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:18:04 crc kubenswrapper[4834]: I0121 16:18:04.153622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" event={"ID":"5cb33b06-57dd-4368-a7cb-97c2b214f2f2","Type":"ContainerStarted","Data":"507726275beb3876ae10f795e5c029346cb5a143b6c984a7d05c7df6936af1ce"}
Jan 21 16:18:06 crc kubenswrapper[4834]: I0121 16:18:06.643237 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68679f7d8c-khwr5"
Jan 21 16:18:06 crc kubenswrapper[4834]: I0121 16:18:06.740825 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"]
Jan 21 16:18:06 crc kubenswrapper[4834]: I0121 16:18:06.741254 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="dnsmasq-dns" containerID="cri-o://2e2f8041717b470b63fa020376ef395240838f243dd21f6b0b2b10ae8c996cf7" gracePeriod=10
Jan 21 16:18:07 crc kubenswrapper[4834]: I0121 16:18:07.196145 4834 generic.go:334] "Generic (PLEG): container finished" podID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerID="2e2f8041717b470b63fa020376ef395240838f243dd21f6b0b2b10ae8c996cf7" exitCode=0
Jan 21 16:18:07 crc kubenswrapper[4834]: I0121 16:18:07.196222 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" event={"ID":"fdcd27ee-d137-4208-a91c-d4184d110a64","Type":"ContainerDied","Data":"2e2f8041717b470b63fa020376ef395240838f243dd21f6b0b2b10ae8c996cf7"}
Jan 21 16:18:08 crc kubenswrapper[4834]: I0121 16:18:08.326223 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7"
Jan 21 16:18:08 crc kubenswrapper[4834]: E0121 16:18:08.327038 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:18:11 crc kubenswrapper[4834]: I0121 16:18:11.051052 4834 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.145:5353: connect: connection refused"
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.122888 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj"
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193325 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193394 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckg8\" (UniqueName: \"kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193579 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193603 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193713 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.193772 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb\") pod \"fdcd27ee-d137-4208-a91c-d4184d110a64\" (UID: \"fdcd27ee-d137-4208-a91c-d4184d110a64\") "
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.202176 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8" (OuterVolumeSpecName: "kube-api-access-mckg8") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "kube-api-access-mckg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.250470 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.252748 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.254540 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config" (OuterVolumeSpecName: "config") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.255467 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.257921 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdcd27ee-d137-4208-a91c-d4184d110a64" (UID: "fdcd27ee-d137-4208-a91c-d4184d110a64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.273752 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" event={"ID":"5cb33b06-57dd-4368-a7cb-97c2b214f2f2","Type":"ContainerStarted","Data":"8513c143ab821fbdec34836994141bc601cec5264a102fe2f6f6de6075d1e5c4"}
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.275828 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" event={"ID":"fdcd27ee-d137-4208-a91c-d4184d110a64","Type":"ContainerDied","Data":"a490921a8a15363b25926cc3d5323335ff300b1bde95bf25f128258d14aba378"}
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.275917 4834 scope.go:117] "RemoveContainer" containerID="2e2f8041717b470b63fa020376ef395240838f243dd21f6b0b2b10ae8c996cf7"
Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.275872 4834 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-8cb58cc59-w2lbj" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296392 4834 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296435 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckg8\" (UniqueName: \"kubernetes.io/projected/fdcd27ee-d137-4208-a91c-d4184d110a64-kube-api-access-mckg8\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296448 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296462 4834 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296473 4834 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.296483 4834 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdcd27ee-d137-4208-a91c-d4184d110a64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.297712 4834 scope.go:117] "RemoveContainer" containerID="b02046bcef660ea31356c63967e5b87bafa773f731a87cd9ad8af00cc64a5d33" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.307690 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" podStartSLOduration=1.702075607 podStartE2EDuration="11.307663603s" podCreationTimestamp="2026-01-21 16:18:02 +0000 UTC" firstStartedPulling="2026-01-21 16:18:03.224134506 +0000 UTC m=+6429.198483551" lastFinishedPulling="2026-01-21 16:18:12.829722502 +0000 UTC m=+6438.804071547" observedRunningTime="2026-01-21 16:18:13.292737506 +0000 UTC m=+6439.267086551" watchObservedRunningTime="2026-01-21 16:18:13.307663603 +0000 UTC m=+6439.282012668" Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.324373 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"] Jan 21 16:18:13 crc kubenswrapper[4834]: I0121 16:18:13.333140 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cb58cc59-w2lbj"] Jan 21 16:18:14 crc kubenswrapper[4834]: I0121 16:18:14.353787 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" path="/var/lib/kubelet/pods/fdcd27ee-d137-4208-a91c-d4184d110a64/volumes" Jan 21 16:18:22 crc kubenswrapper[4834]: I0121 16:18:22.325817 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:18:22 crc kubenswrapper[4834]: E0121 16:18:22.326873 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:18:26 crc kubenswrapper[4834]: I0121 16:18:26.405267 4834 generic.go:334] "Generic (PLEG): container finished" podID="5cb33b06-57dd-4368-a7cb-97c2b214f2f2" containerID="8513c143ab821fbdec34836994141bc601cec5264a102fe2f6f6de6075d1e5c4" exitCode=0 Jan 21 16:18:26 crc kubenswrapper[4834]: I0121 16:18:26.405481 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" event={"ID":"5cb33b06-57dd-4368-a7cb-97c2b214f2f2","Type":"ContainerDied","Data":"8513c143ab821fbdec34836994141bc601cec5264a102fe2f6f6de6075d1e5c4"} Jan 21 16:18:27 crc kubenswrapper[4834]: I0121 16:18:27.960773 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.048616 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1\") pod \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.048712 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle\") pod \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.048776 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph\") pod \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.048863 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcknb\" (UniqueName: \"kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb\") pod \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.048960 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory\") pod \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\" (UID: \"5cb33b06-57dd-4368-a7cb-97c2b214f2f2\") " Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.058826 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph" (OuterVolumeSpecName: "ceph") pod "5cb33b06-57dd-4368-a7cb-97c2b214f2f2" (UID: "5cb33b06-57dd-4368-a7cb-97c2b214f2f2"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.059148 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb" (OuterVolumeSpecName: "kube-api-access-vcknb") pod "5cb33b06-57dd-4368-a7cb-97c2b214f2f2" (UID: "5cb33b06-57dd-4368-a7cb-97c2b214f2f2"). InnerVolumeSpecName "kube-api-access-vcknb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.059242 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "5cb33b06-57dd-4368-a7cb-97c2b214f2f2" (UID: "5cb33b06-57dd-4368-a7cb-97c2b214f2f2"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.079647 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "5cb33b06-57dd-4368-a7cb-97c2b214f2f2" (UID: "5cb33b06-57dd-4368-a7cb-97c2b214f2f2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.081769 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory" (OuterVolumeSpecName: "inventory") pod "5cb33b06-57dd-4368-a7cb-97c2b214f2f2" (UID: "5cb33b06-57dd-4368-a7cb-97c2b214f2f2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.167598 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.167650 4834 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.167664 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.167676 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcknb\" (UniqueName: \"kubernetes.io/projected/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-kube-api-access-vcknb\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.167688 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cb33b06-57dd-4368-a7cb-97c2b214f2f2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.430108 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" event={"ID":"5cb33b06-57dd-4368-a7cb-97c2b214f2f2","Type":"ContainerDied","Data":"507726275beb3876ae10f795e5c029346cb5a143b6c984a7d05c7df6936af1ce"} Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.430148 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507726275beb3876ae10f795e5c029346cb5a143b6c984a7d05c7df6936af1ce" Jan 21 16:18:28 crc kubenswrapper[4834]: I0121 16:18:28.430476 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.428245 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62"] Jan 21 16:18:29 crc kubenswrapper[4834]: E0121 16:18:29.429297 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="init" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.429319 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="init" Jan 21 16:18:29 crc kubenswrapper[4834]: E0121 16:18:29.429374 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb33b06-57dd-4368-a7cb-97c2b214f2f2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.429388 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb33b06-57dd-4368-a7cb-97c2b214f2f2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:18:29 crc kubenswrapper[4834]: E0121 16:18:29.429418 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="dnsmasq-dns" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.429427 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="dnsmasq-dns" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.429719 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb33b06-57dd-4368-a7cb-97c2b214f2f2" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.429745 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcd27ee-d137-4208-a91c-d4184d110a64" containerName="dnsmasq-dns" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.431340 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.438619 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.438881 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.438968 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.444714 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.446874 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62"] Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.496203 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.496471 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.496553 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.496795 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.496847 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmz8q\" (UniqueName: \"kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.598975 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.599100 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.599132 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.599251 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.599291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmz8q\" (UniqueName: \"kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.605633 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.606633 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.607625 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.610774 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.616903 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmz8q\" (UniqueName: \"kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:29 crc kubenswrapper[4834]: I0121 16:18:29.753455 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" Jan 21 16:18:30 crc kubenswrapper[4834]: I0121 16:18:30.319242 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62"] Jan 21 16:18:30 crc kubenswrapper[4834]: I0121 16:18:30.457221 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" event={"ID":"9d206bcf-791e-4e8e-bddf-faf2365abf8c","Type":"ContainerStarted","Data":"768277a2a0fed17914260b427c384023e522ee25bc9428b7399a13c1ebddbb0e"} Jan 21 16:18:31 crc kubenswrapper[4834]: I0121 16:18:31.476840 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" event={"ID":"9d206bcf-791e-4e8e-bddf-faf2365abf8c","Type":"ContainerStarted","Data":"1d2d155e389cbc6166d8ced4c8b8cf81d7d1439d921efb8d71b3948fc1fdb597"} Jan 21 16:18:31 crc kubenswrapper[4834]: I0121 16:18:31.510807 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" podStartSLOduration=2.104798142 podStartE2EDuration="2.510783931s" podCreationTimestamp="2026-01-21 16:18:29 +0000 UTC" firstStartedPulling="2026-01-21 16:18:30.326519019 +0000 UTC m=+6456.300868064" lastFinishedPulling="2026-01-21 16:18:30.732504768 +0000 UTC m=+6456.706853853" observedRunningTime="2026-01-21 16:18:31.498855628 +0000 UTC m=+6457.473204693" watchObservedRunningTime="2026-01-21 16:18:31.510783931 +0000 UTC m=+6457.485133006" Jan 21 16:18:34 crc kubenswrapper[4834]: I0121 16:18:34.332211 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:18:34 crc kubenswrapper[4834]: E0121 16:18:34.333086 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:18:49 crc kubenswrapper[4834]: I0121 16:18:49.325316 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:18:49 crc kubenswrapper[4834]: E0121 16:18:49.326215 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:01 crc kubenswrapper[4834]: I0121 16:19:01.325117 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:19:01 crc kubenswrapper[4834]: E0121 16:19:01.325967 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:12 crc kubenswrapper[4834]: I0121 16:19:12.324704 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:19:12 crc kubenswrapper[4834]: E0121 16:19:12.325479 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:12 crc kubenswrapper[4834]: I0121 16:19:12.781157 4834 scope.go:117] "RemoveContainer" containerID="f334e13796776a405d7b41c1c64bc7237ef3672440bbb3a1de1da93d69a8713b" Jan 21 16:19:12 crc kubenswrapper[4834]: I0121 16:19:12.805296 4834 scope.go:117] "RemoveContainer" containerID="ca5c3f15cee86c56029b8a65de022dfec1c6d903bf126fa9ad348a526be4fbbe" Jan 21 16:19:13 crc kubenswrapper[4834]: I0121 16:19:13.032100 4834 scope.go:117] "RemoveContainer" containerID="f717a62a5a73125c73be144d418ee5bdc91381ab1b16df06a342385bd203a91f" Jan 21 16:19:13 crc kubenswrapper[4834]: I0121 16:19:13.052406 4834 scope.go:117] "RemoveContainer" containerID="01917efb5938ab27671e453452ea40599ed205712915bae1c0b6db63d8a46014" Jan 21 16:19:23 crc kubenswrapper[4834]: I0121 16:19:23.325436 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:19:23 crc kubenswrapper[4834]: E0121 16:19:23.326308 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.492083 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.496507 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.511653 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.610562 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vlt\" (UniqueName: \"kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.610638 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.610729 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.712794 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vlt\" (UniqueName: \"kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.712848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.712886 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.713503 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.713668 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.732869 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-26vlt\" (UniqueName: \"kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt\") pod \"redhat-operators-chljk\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:25 crc kubenswrapper[4834]: I0121 16:19:25.826786 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:26 crc kubenswrapper[4834]: I0121 16:19:26.297316 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:27 crc kubenswrapper[4834]: I0121 16:19:27.056697 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-h6sx4"] Jan 21 16:19:27 crc kubenswrapper[4834]: I0121 16:19:27.059859 4834 generic.go:334] "Generic (PLEG): container finished" podID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerID="e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d" exitCode=0 Jan 21 16:19:27 crc kubenswrapper[4834]: I0121 16:19:27.059894 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerDied","Data":"e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d"} Jan 21 16:19:27 crc kubenswrapper[4834]: I0121 16:19:27.059917 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerStarted","Data":"1edb72f99c4d3a0263e62080f04a40de81fb4c2790f6511da2a1ea90e690af8f"} Jan 21 16:19:27 crc kubenswrapper[4834]: I0121 16:19:27.067677 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-h6sx4"] Jan 21 16:19:28 crc kubenswrapper[4834]: I0121 16:19:28.390981 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267a7bbb-4e41-49ee-89c2-3e43db4ca52c" path="/var/lib/kubelet/pods/267a7bbb-4e41-49ee-89c2-3e43db4ca52c/volumes" Jan 21 16:19:29 crc kubenswrapper[4834]: I0121 16:19:29.036035 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-cdc0-account-create-update-9dqlg"] Jan 21 16:19:29 crc kubenswrapper[4834]: I0121 16:19:29.044996 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-cdc0-account-create-update-9dqlg"] Jan 21 16:19:29 crc kubenswrapper[4834]: I0121 16:19:29.086011 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerStarted","Data":"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc"} Jan 21 16:19:30 crc kubenswrapper[4834]: I0121 16:19:30.359337 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59880e0f-faf1-4ebf-873c-fe4782233147" path="/var/lib/kubelet/pods/59880e0f-faf1-4ebf-873c-fe4782233147/volumes" Jan 21 16:19:33 crc kubenswrapper[4834]: I0121 16:19:33.124218 4834 generic.go:334] "Generic (PLEG): container finished" podID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerID="79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc" exitCode=0 Jan 21 16:19:33 crc kubenswrapper[4834]: I0121 16:19:33.124289 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" 
event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerDied","Data":"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc"} Jan 21 16:19:34 crc kubenswrapper[4834]: I0121 16:19:34.141082 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerStarted","Data":"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7"} Jan 21 16:19:34 crc kubenswrapper[4834]: I0121 16:19:34.166281 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chljk" podStartSLOduration=2.680669915 podStartE2EDuration="9.166249965s" podCreationTimestamp="2026-01-21 16:19:25 +0000 UTC" firstStartedPulling="2026-01-21 16:19:27.062430902 +0000 UTC m=+6513.036779947" lastFinishedPulling="2026-01-21 16:19:33.548010952 +0000 UTC m=+6519.522359997" observedRunningTime="2026-01-21 16:19:34.160519686 +0000 UTC m=+6520.134868741" watchObservedRunningTime="2026-01-21 16:19:34.166249965 +0000 UTC m=+6520.140599010" Jan 21 16:19:35 crc kubenswrapper[4834]: I0121 16:19:35.031712 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-gs4pk"] Jan 21 16:19:35 crc kubenswrapper[4834]: I0121 16:19:35.042154 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-gs4pk"] Jan 21 16:19:35 crc kubenswrapper[4834]: I0121 16:19:35.827556 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:35 crc kubenswrapper[4834]: I0121 16:19:35.827982 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:36 crc kubenswrapper[4834]: I0121 16:19:36.043704 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-8947-account-create-update-9q7zv"] Jan 21 16:19:36 crc kubenswrapper[4834]: I0121 16:19:36.060105 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-8947-account-create-update-9q7zv"] Jan 21 16:19:36 crc kubenswrapper[4834]: I0121 16:19:36.339858 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c869cb2-7419-4c97-b877-02533151d2b6" path="/var/lib/kubelet/pods/3c869cb2-7419-4c97-b877-02533151d2b6/volumes" Jan 21 16:19:36 crc kubenswrapper[4834]: I0121 16:19:36.341593 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d096b2-0fbc-4b87-8a6f-cb77217ced9d" path="/var/lib/kubelet/pods/47d096b2-0fbc-4b87-8a6f-cb77217ced9d/volumes" Jan 21 16:19:36 crc kubenswrapper[4834]: I0121 16:19:36.878954 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chljk" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="registry-server" probeResult="failure" output=< Jan 21 16:19:36 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 16:19:36 crc kubenswrapper[4834]: > Jan 21 16:19:37 crc kubenswrapper[4834]: I0121 16:19:37.325648 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:19:37 crc kubenswrapper[4834]: E0121 16:19:37.326247 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:45 crc kubenswrapper[4834]: I0121 16:19:45.877379 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:45 crc kubenswrapper[4834]: I0121 16:19:45.938089 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:46 crc kubenswrapper[4834]: I0121 16:19:46.121818 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.270384 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chljk" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="registry-server" containerID="cri-o://d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7" gracePeriod=2 Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.842201 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.936050 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26vlt\" (UniqueName: \"kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt\") pod \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.936396 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities\") pod \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.936552 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content\") pod \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\" (UID: \"fd9f785f-a4a2-4e89-876b-b4f1a000c436\") " Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.937564 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities" (OuterVolumeSpecName: "utilities") pod "fd9f785f-a4a2-4e89-876b-b4f1a000c436" (UID: "fd9f785f-a4a2-4e89-876b-b4f1a000c436"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.943205 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt" (OuterVolumeSpecName: "kube-api-access-26vlt") pod "fd9f785f-a4a2-4e89-876b-b4f1a000c436" (UID: "fd9f785f-a4a2-4e89-876b-b4f1a000c436"). InnerVolumeSpecName "kube-api-access-26vlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.947026 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26vlt\" (UniqueName: \"kubernetes.io/projected/fd9f785f-a4a2-4e89-876b-b4f1a000c436-kube-api-access-26vlt\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:47 crc kubenswrapper[4834]: I0121 16:19:47.947053 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.052843 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd9f785f-a4a2-4e89-876b-b4f1a000c436" (UID: "fd9f785f-a4a2-4e89-876b-b4f1a000c436"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.056062 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9f785f-a4a2-4e89-876b-b4f1a000c436-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.281319 4834 generic.go:334] "Generic (PLEG): container finished" podID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerID="d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7" exitCode=0 Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.281373 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerDied","Data":"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7"} Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.281408 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chljk" event={"ID":"fd9f785f-a4a2-4e89-876b-b4f1a000c436","Type":"ContainerDied","Data":"1edb72f99c4d3a0263e62080f04a40de81fb4c2790f6511da2a1ea90e690af8f"} Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.281424 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chljk" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.281432 4834 scope.go:117] "RemoveContainer" containerID="d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.303769 4834 scope.go:117] "RemoveContainer" containerID="79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.342061 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.342101 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chljk"] Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.350042 4834 scope.go:117] "RemoveContainer" containerID="e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.389053 4834 scope.go:117] "RemoveContainer" containerID="d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7" Jan 21 16:19:48 crc kubenswrapper[4834]: E0121 16:19:48.389483 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7\": container with ID starting with d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7 not found: ID does not exist" containerID="d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.389553 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7"} err="failed to get container status \"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7\": rpc error: code = NotFound desc = could not find container \"d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7\": container with ID starting with d42bb380b86838dab6060f302856833c7816041a7c5868e21dc5f3dfecfcbbd7 not found: ID does not exist" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.389588 4834 scope.go:117] "RemoveContainer" containerID="79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc" Jan 21 16:19:48 crc kubenswrapper[4834]: E0121 16:19:48.389943 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc\": container with ID starting with 79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc not found: ID does not exist" containerID="79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.389984 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc"} err="failed to get container status \"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc\": rpc error: code = NotFound desc = could not find container \"79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc\": container with ID starting with 79b18379f0e21b214a00314ea2afd0d77335d234e87ebf46ee108e408e9ea6cc not found: ID does not exist" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.390008 4834 scope.go:117] "RemoveContainer" 
containerID="e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d" Jan 21 16:19:48 crc kubenswrapper[4834]: E0121 16:19:48.390240 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d\": container with ID starting with e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d not found: ID does not exist" containerID="e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d" Jan 21 16:19:48 crc kubenswrapper[4834]: I0121 16:19:48.390260 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d"} err="failed to get container status \"e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d\": rpc error: code = NotFound desc = could not find container \"e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d\": container with ID starting with e66bd71b9099a3ebff6fbafec63aa53cdea36d1a4f09f7370b08bb79beab045d not found: ID does not exist" Jan 21 16:19:50 crc kubenswrapper[4834]: I0121 16:19:50.336393 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" path="/var/lib/kubelet/pods/fd9f785f-a4a2-4e89-876b-b4f1a000c436/volumes" Jan 21 16:19:51 crc kubenswrapper[4834]: I0121 16:19:51.324474 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:19:51 crc kubenswrapper[4834]: E0121 16:19:51.325068 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.806048 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:19:57 crc kubenswrapper[4834]: E0121 16:19:57.807102 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="extract-content" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.807116 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="extract-content" Jan 21 16:19:57 crc kubenswrapper[4834]: E0121 16:19:57.807143 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="extract-utilities" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.807150 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="extract-utilities" Jan 21 16:19:57 crc kubenswrapper[4834]: E0121 16:19:57.807163 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="registry-server" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.807170 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="registry-server" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.807385 4834 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd9f785f-a4a2-4e89-876b-b4f1a000c436" containerName="registry-server" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.809051 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.819943 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.991042 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdtf8\" (UniqueName: \"kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.991136 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:57 crc kubenswrapper[4834]: I0121 16:19:57.991359 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.098012 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdtf8\" (UniqueName: \"kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.098197 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.098324 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.099255 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.099397 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content\") pod \"community-operators-2nzvt\" (UID: 
\"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.138117 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdtf8\" (UniqueName: \"kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8\") pod \"community-operators-2nzvt\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.434358 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:19:58 crc kubenswrapper[4834]: I0121 16:19:58.957151 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:19:59 crc kubenswrapper[4834]: I0121 16:19:59.399448 4834 generic.go:334] "Generic (PLEG): container finished" podID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerID="c6a2e13d8818642b3d24bf3cf6b7c9d9fb190d996c6941703bfb76b01e43e226" exitCode=0 Jan 21 16:19:59 crc kubenswrapper[4834]: I0121 16:19:59.399594 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerDied","Data":"c6a2e13d8818642b3d24bf3cf6b7c9d9fb190d996c6941703bfb76b01e43e226"} Jan 21 16:19:59 crc kubenswrapper[4834]: I0121 16:19:59.399719 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerStarted","Data":"11db735288b9239e566da7fdb07f531f8fd752910edc0f318d7cadfac5ed71e1"} Jan 21 16:20:00 crc kubenswrapper[4834]: I0121 16:20:00.409477 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerStarted","Data":"368f5062e40064ae888d67f279448c142ce8f61083c28fda18eccb9eacc9e8fb"} Jan 21 16:20:01 crc kubenswrapper[4834]: I0121 16:20:01.424306 4834 generic.go:334] "Generic (PLEG): container finished" podID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerID="368f5062e40064ae888d67f279448c142ce8f61083c28fda18eccb9eacc9e8fb" exitCode=0 Jan 21 16:20:01 crc kubenswrapper[4834]: I0121 16:20:01.424392 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerDied","Data":"368f5062e40064ae888d67f279448c142ce8f61083c28fda18eccb9eacc9e8fb"} Jan 21 16:20:02 crc kubenswrapper[4834]: I0121 16:20:02.325175 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:20:02 crc kubenswrapper[4834]: E0121 16:20:02.325833 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:20:02 crc kubenswrapper[4834]: I0121 16:20:02.440114 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" 
event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerStarted","Data":"2ac93ee6916bbcfa4e0c596919b6c3e0b600bf026ef637d85c0cf25c829d8828"} Jan 21 16:20:02 crc kubenswrapper[4834]: I0121 16:20:02.463755 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2nzvt" podStartSLOduration=3.010835225 podStartE2EDuration="5.463734389s" podCreationTimestamp="2026-01-21 16:19:57 +0000 UTC" firstStartedPulling="2026-01-21 16:19:59.401911344 +0000 UTC m=+6545.376260389" lastFinishedPulling="2026-01-21 16:20:01.854810508 +0000 UTC m=+6547.829159553" observedRunningTime="2026-01-21 16:20:02.459774646 +0000 UTC m=+6548.434123701" watchObservedRunningTime="2026-01-21 16:20:02.463734389 +0000 UTC m=+6548.438083434" Jan 21 16:20:08 crc kubenswrapper[4834]: I0121 16:20:08.434858 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:08 crc kubenswrapper[4834]: I0121 16:20:08.435410 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:08 crc kubenswrapper[4834]: I0121 16:20:08.491233 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:08 crc kubenswrapper[4834]: I0121 16:20:08.545671 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:08 crc kubenswrapper[4834]: I0121 16:20:08.726389 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:20:10 crc kubenswrapper[4834]: I0121 16:20:10.508592 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2nzvt" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="registry-server" containerID="cri-o://2ac93ee6916bbcfa4e0c596919b6c3e0b600bf026ef637d85c0cf25c829d8828" gracePeriod=2 Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.539564 4834 generic.go:334] "Generic (PLEG): container finished" podID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerID="2ac93ee6916bbcfa4e0c596919b6c3e0b600bf026ef637d85c0cf25c829d8828" exitCode=0 Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.540075 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerDied","Data":"2ac93ee6916bbcfa4e0c596919b6c3e0b600bf026ef637d85c0cf25c829d8828"} Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.698205 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.815756 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities\") pod \"ad6873db-3783-4cda-b552-b5b3694d23e8\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.815882 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content\") pod \"ad6873db-3783-4cda-b552-b5b3694d23e8\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.816032 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdtf8\" (UniqueName: \"kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8\") pod \"ad6873db-3783-4cda-b552-b5b3694d23e8\" (UID: \"ad6873db-3783-4cda-b552-b5b3694d23e8\") " Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.817119 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities" (OuterVolumeSpecName: "utilities") pod "ad6873db-3783-4cda-b552-b5b3694d23e8" (UID: "ad6873db-3783-4cda-b552-b5b3694d23e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.822703 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8" (OuterVolumeSpecName: "kube-api-access-jdtf8") pod "ad6873db-3783-4cda-b552-b5b3694d23e8" (UID: "ad6873db-3783-4cda-b552-b5b3694d23e8"). InnerVolumeSpecName "kube-api-access-jdtf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.871495 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad6873db-3783-4cda-b552-b5b3694d23e8" (UID: "ad6873db-3783-4cda-b552-b5b3694d23e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.919182 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdtf8\" (UniqueName: \"kubernetes.io/projected/ad6873db-3783-4cda-b552-b5b3694d23e8-kube-api-access-jdtf8\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.919219 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:11 crc kubenswrapper[4834]: I0121 16:20:11.919230 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad6873db-3783-4cda-b552-b5b3694d23e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.552881 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nzvt" event={"ID":"ad6873db-3783-4cda-b552-b5b3694d23e8","Type":"ContainerDied","Data":"11db735288b9239e566da7fdb07f531f8fd752910edc0f318d7cadfac5ed71e1"} Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.553296 4834 scope.go:117] "RemoveContainer" containerID="2ac93ee6916bbcfa4e0c596919b6c3e0b600bf026ef637d85c0cf25c829d8828" Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.553111 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nzvt" Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.590493 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.600873 4834 scope.go:117] "RemoveContainer" containerID="368f5062e40064ae888d67f279448c142ce8f61083c28fda18eccb9eacc9e8fb" Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.605306 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2nzvt"] Jan 21 16:20:12 crc kubenswrapper[4834]: I0121 16:20:12.624659 4834 scope.go:117] "RemoveContainer" containerID="c6a2e13d8818642b3d24bf3cf6b7c9d9fb190d996c6941703bfb76b01e43e226" Jan 21 16:20:13 crc kubenswrapper[4834]: I0121 16:20:13.336466 4834 scope.go:117] "RemoveContainer" containerID="24aaa50e53620921258aee6e62d15fe59150d20a1358882edbc5773f4c1fd305" Jan 21 16:20:13 crc kubenswrapper[4834]: I0121 16:20:13.373951 4834 scope.go:117] "RemoveContainer" containerID="099b287e41765628facfe23daf5aa366db3153e5f7b6860e9f61cf04911ea383" Jan 21 16:20:13 crc kubenswrapper[4834]: I0121 16:20:13.437721 4834 scope.go:117] "RemoveContainer" containerID="e75b8c6b826eb55cc41ffdaa3b317b76fe949daa820c67cfa52f872700bb156a" Jan 21 16:20:13 crc kubenswrapper[4834]: I0121 16:20:13.466623 4834 scope.go:117] "RemoveContainer" containerID="910ba4b8cdc2537e07fe4133312bdc68f34449a3e9f03681824303e6c1d5c474" Jan 21 16:20:14 crc kubenswrapper[4834]: I0121 16:20:14.341365 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" path="/var/lib/kubelet/pods/ad6873db-3783-4cda-b552-b5b3694d23e8/volumes" Jan 21 16:20:15 crc kubenswrapper[4834]: I0121 16:20:15.324891 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:20:15 crc kubenswrapper[4834]: E0121 16:20:15.326039 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:20:24 crc kubenswrapper[4834]: I0121 16:20:24.049662 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-sc5zg"] Jan 21 16:20:24 crc kubenswrapper[4834]: I0121 16:20:24.064740 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-sc5zg"] Jan 21 16:20:24 crc kubenswrapper[4834]: I0121 16:20:24.339131 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5fa72e-0703-40bf-9041-4b3b21899793" path="/var/lib/kubelet/pods/af5fa72e-0703-40bf-9041-4b3b21899793/volumes" Jan 21 16:20:29 crc kubenswrapper[4834]: I0121 16:20:29.325527 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:20:29 crc kubenswrapper[4834]: E0121 16:20:29.326134 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:20:43 crc kubenswrapper[4834]: I0121 16:20:43.325442 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:20:43 crc kubenswrapper[4834]: E0121 16:20:43.327687 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:20:55 crc kubenswrapper[4834]: I0121 16:20:55.325566 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:20:55 crc kubenswrapper[4834]: E0121 16:20:55.326314 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:21:08 crc kubenswrapper[4834]: I0121 16:21:08.326564 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:21:08 crc kubenswrapper[4834]: E0121 16:21:08.327558 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:21:13 crc kubenswrapper[4834]: I0121 16:21:13.643833 4834 scope.go:117] "RemoveContainer" containerID="1b806932c50ec5b289ce53745d4e2ca5e2aa36f53445ef99326f711d10e5ac8c" Jan 21 16:21:13 crc kubenswrapper[4834]: I0121 16:21:13.677977 4834 scope.go:117] "RemoveContainer" containerID="027c83181be5d21f438c1ea3bc0a003d25e6d75fac6d2814f1c8e2817a1b9564" Jan 21 16:21:21 crc kubenswrapper[4834]: I0121 16:21:21.324964 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:21:21 crc kubenswrapper[4834]: E0121 16:21:21.325747 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:21:34 crc kubenswrapper[4834]: I0121 16:21:34.331947 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:21:34 crc kubenswrapper[4834]: E0121 16:21:34.332579 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:21:45 crc kubenswrapper[4834]: I0121 16:21:45.324867 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:21:45 crc kubenswrapper[4834]: E0121 16:21:45.325673 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.690817 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:21:50 crc kubenswrapper[4834]: E0121 16:21:50.691874 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="registry-server" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.691893 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="registry-server" Jan 21 16:21:50 crc kubenswrapper[4834]: E0121 16:21:50.691905 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="extract-utilities" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.691914 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="extract-utilities" Jan 21 16:21:50 crc kubenswrapper[4834]: E0121 16:21:50.691993 4834 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="extract-content" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.692004 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="extract-content" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.692244 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6873db-3783-4cda-b552-b5b3694d23e8" containerName="registry-server" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.694051 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.700232 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.869598 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.869693 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.869753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxbp\" (UniqueName: \"kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.971700 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.972146 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxbp\" (UniqueName: \"kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.972267 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.972309 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities\") pod 
\"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:50 crc kubenswrapper[4834]: I0121 16:21:50.972565 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:51 crc kubenswrapper[4834]: I0121 16:21:51.001066 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxbp\" (UniqueName: \"kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp\") pod \"redhat-marketplace-lkcm9\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:51 crc kubenswrapper[4834]: I0121 16:21:51.014848 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:21:51 crc kubenswrapper[4834]: I0121 16:21:51.489203 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:21:51 crc kubenswrapper[4834]: I0121 16:21:51.565462 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerStarted","Data":"5d28e561080b7a05a9339120e9f5a60f7e6a68b43d35090ce9e4045443813bad"} Jan 21 16:21:52 crc kubenswrapper[4834]: I0121 16:21:52.575904 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerID="9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861" exitCode=0 Jan 21 16:21:52 crc kubenswrapper[4834]: I0121 16:21:52.575966 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerDied","Data":"9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861"} Jan 21 16:21:53 crc kubenswrapper[4834]: I0121 16:21:53.588577 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerStarted","Data":"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c"} Jan 21 16:21:54 crc kubenswrapper[4834]: I0121 16:21:54.599627 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerID="718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c" exitCode=0 Jan 21 16:21:54 crc kubenswrapper[4834]: I0121 16:21:54.599727 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerDied","Data":"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c"} Jan 21 16:21:55 crc kubenswrapper[4834]: I0121 16:21:55.611400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerStarted","Data":"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332"} Jan 21 16:21:55 crc kubenswrapper[4834]: I0121 16:21:55.637908 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-lkcm9" podStartSLOduration=3.231005397 podStartE2EDuration="5.63788684s" podCreationTimestamp="2026-01-21 16:21:50 +0000 UTC" firstStartedPulling="2026-01-21 16:21:52.577921273 +0000 UTC m=+6658.552270318" lastFinishedPulling="2026-01-21 16:21:54.984802716 +0000 UTC m=+6660.959151761" observedRunningTime="2026-01-21 16:21:55.63278977 +0000 UTC m=+6661.607138825" watchObservedRunningTime="2026-01-21 16:21:55.63788684 +0000 UTC m=+6661.612235885" Jan 21 16:22:00 crc kubenswrapper[4834]: I0121 16:22:00.325617 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:22:00 crc kubenswrapper[4834]: E0121 16:22:00.326426 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:22:01 crc kubenswrapper[4834]: I0121 16:22:01.016118 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:01 crc kubenswrapper[4834]: I0121 16:22:01.016612 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:01 crc kubenswrapper[4834]: I0121 16:22:01.064712 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:01 crc kubenswrapper[4834]: I0121 16:22:01.726835 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:01 crc kubenswrapper[4834]: I0121 16:22:01.787358 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:22:03 crc kubenswrapper[4834]: I0121 16:22:03.696955 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lkcm9" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="registry-server" containerID="cri-o://861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332" gracePeriod=2 Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.257253 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.399728 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content\") pod \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.400035 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swxbp\" (UniqueName: \"kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp\") pod \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.400188 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities\") pod \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\" (UID: \"bb542cfe-ac94-4cda-a92e-d2473c9afdb7\") " Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.401956 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities" (OuterVolumeSpecName: "utilities") pod "bb542cfe-ac94-4cda-a92e-d2473c9afdb7" (UID: "bb542cfe-ac94-4cda-a92e-d2473c9afdb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.406780 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp" (OuterVolumeSpecName: "kube-api-access-swxbp") pod "bb542cfe-ac94-4cda-a92e-d2473c9afdb7" (UID: "bb542cfe-ac94-4cda-a92e-d2473c9afdb7"). InnerVolumeSpecName "kube-api-access-swxbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.424175 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb542cfe-ac94-4cda-a92e-d2473c9afdb7" (UID: "bb542cfe-ac94-4cda-a92e-d2473c9afdb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.503161 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swxbp\" (UniqueName: \"kubernetes.io/projected/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-kube-api-access-swxbp\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.503195 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.503204 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb542cfe-ac94-4cda-a92e-d2473c9afdb7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.708342 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerID="861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332" exitCode=0 Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.708409 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerDied","Data":"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332"} Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.708498 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lkcm9" event={"ID":"bb542cfe-ac94-4cda-a92e-d2473c9afdb7","Type":"ContainerDied","Data":"5d28e561080b7a05a9339120e9f5a60f7e6a68b43d35090ce9e4045443813bad"} Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.708524 4834 scope.go:117] "RemoveContainer" containerID="861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.708439 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lkcm9" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.732690 4834 scope.go:117] "RemoveContainer" containerID="718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.757490 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.762960 4834 scope.go:117] "RemoveContainer" containerID="9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.786447 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lkcm9"] Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.809841 4834 scope.go:117] "RemoveContainer" containerID="861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332" Jan 21 16:22:04 crc kubenswrapper[4834]: E0121 16:22:04.810386 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332\": container with ID starting with 861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332 not found: ID does not exist" containerID="861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.810437 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332"} err="failed to get container status \"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332\": rpc error: code = NotFound desc = could not find container \"861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332\": container with ID starting with 861d194d1cc9f98cd863877f29557bdd7724d437902b52be5951370a71850332 not found: ID does not exist" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.810524 4834 scope.go:117] "RemoveContainer" containerID="718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c" Jan 21 16:22:04 crc kubenswrapper[4834]: E0121 16:22:04.810885 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c\": container with ID starting with 718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c not found: ID does not exist" containerID="718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.810946 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c"} err="failed to get container status \"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c\": rpc error: code = NotFound desc = could not find container \"718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c\": container with ID starting with 718b5702fb396588837c69ea90d51fc2b7dc726caad725577b59afd4ffeee80c not found: ID does not exist" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.810973 4834 scope.go:117] "RemoveContainer" containerID="9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861" Jan 21 16:22:04 crc kubenswrapper[4834]: E0121 16:22:04.811677 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861\": container with ID starting with 9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861 not found: ID does not exist" containerID="9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861" Jan 21 16:22:04 crc kubenswrapper[4834]: I0121 16:22:04.811723 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861"} err="failed to get container status \"9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861\": rpc error: code = NotFound desc = could not find container \"9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861\": container with ID starting with 9750d17d47bbde7263fd935f545e9c3c4674f0cb262ba35077e402c33e66c861 not found: ID does not exist" Jan 21 16:22:06 crc kubenswrapper[4834]: I0121 16:22:06.342653 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" path="/var/lib/kubelet/pods/bb542cfe-ac94-4cda-a92e-d2473c9afdb7/volumes" Jan 21 16:22:12 crc kubenswrapper[4834]: I0121 16:22:12.326201 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:22:12 crc kubenswrapper[4834]: E0121 16:22:12.326903 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:22:25 crc kubenswrapper[4834]: I0121 16:22:25.330400 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:22:25 crc kubenswrapper[4834]: I0121 16:22:25.943169 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614"} Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.199781 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:06 crc kubenswrapper[4834]: E0121 16:23:06.200736 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="extract-utilities" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.200749 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="extract-utilities" Jan 21 16:23:06 crc kubenswrapper[4834]: E0121 16:23:06.200781 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="extract-content" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.200786 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="extract-content" Jan 21 16:23:06 crc kubenswrapper[4834]: E0121 16:23:06.200795 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="registry-server" 
Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.200801 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="registry-server" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.201046 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb542cfe-ac94-4cda-a92e-d2473c9afdb7" containerName="registry-server" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.202667 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.214001 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.365486 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9tzr\" (UniqueName: \"kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.365595 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.365850 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.468685 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.469163 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9tzr\" (UniqueName: \"kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.469332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.469221 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " 
pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.469795 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.493666 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9tzr\" (UniqueName: \"kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr\") pod \"certified-operators-st9rp\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:06 crc kubenswrapper[4834]: I0121 16:23:06.523242 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:07 crc kubenswrapper[4834]: I0121 16:23:07.026152 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:07 crc kubenswrapper[4834]: I0121 16:23:07.444767 4834 generic.go:334] "Generic (PLEG): container finished" podID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerID="f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5" exitCode=0 Jan 21 16:23:07 crc kubenswrapper[4834]: I0121 16:23:07.445431 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerDied","Data":"f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5"} Jan 21 16:23:07 crc kubenswrapper[4834]: I0121 16:23:07.445478 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerStarted","Data":"878ec27eb96a033982433bb519972d4ad406c45f0b5f5b797c64a66069f256de"} Jan 21 16:23:07 crc kubenswrapper[4834]: I0121 16:23:07.458394 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:23:09 crc kubenswrapper[4834]: I0121 16:23:09.468601 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerStarted","Data":"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770"} Jan 21 16:23:10 crc kubenswrapper[4834]: I0121 16:23:10.481482 4834 generic.go:334] "Generic (PLEG): container finished" podID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerID="09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770" exitCode=0 Jan 21 16:23:10 crc kubenswrapper[4834]: I0121 16:23:10.481587 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerDied","Data":"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770"} Jan 21 16:23:11 crc kubenswrapper[4834]: I0121 16:23:11.494374 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerStarted","Data":"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41"} Jan 21 16:23:11 crc kubenswrapper[4834]: I0121 16:23:11.524284 
4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-st9rp" podStartSLOduration=1.9909993 podStartE2EDuration="5.524254302s" podCreationTimestamp="2026-01-21 16:23:06 +0000 UTC" firstStartedPulling="2026-01-21 16:23:07.458033957 +0000 UTC m=+6733.432383002" lastFinishedPulling="2026-01-21 16:23:10.991288969 +0000 UTC m=+6736.965638004" observedRunningTime="2026-01-21 16:23:11.514113225 +0000 UTC m=+6737.488462320" watchObservedRunningTime="2026-01-21 16:23:11.524254302 +0000 UTC m=+6737.498603337" Jan 21 16:23:16 crc kubenswrapper[4834]: I0121 16:23:16.523625 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:16 crc kubenswrapper[4834]: I0121 16:23:16.524168 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:16 crc kubenswrapper[4834]: I0121 16:23:16.597317 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:16 crc kubenswrapper[4834]: I0121 16:23:16.668348 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:16 crc kubenswrapper[4834]: I0121 16:23:16.842758 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:18 crc kubenswrapper[4834]: I0121 16:23:18.558498 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-st9rp" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="registry-server" containerID="cri-o://f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41" gracePeriod=2 Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.137152 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.181894 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9tzr\" (UniqueName: \"kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr\") pod \"515f4187-91ef-4937-89e2-acd16cdcc2a5\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.181986 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities\") pod \"515f4187-91ef-4937-89e2-acd16cdcc2a5\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.182695 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content\") pod \"515f4187-91ef-4937-89e2-acd16cdcc2a5\" (UID: \"515f4187-91ef-4937-89e2-acd16cdcc2a5\") " Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.182909 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities" (OuterVolumeSpecName: "utilities") pod "515f4187-91ef-4937-89e2-acd16cdcc2a5" (UID: "515f4187-91ef-4937-89e2-acd16cdcc2a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.187685 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr" (OuterVolumeSpecName: "kube-api-access-m9tzr") pod "515f4187-91ef-4937-89e2-acd16cdcc2a5" (UID: "515f4187-91ef-4937-89e2-acd16cdcc2a5"). InnerVolumeSpecName "kube-api-access-m9tzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.209460 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9tzr\" (UniqueName: \"kubernetes.io/projected/515f4187-91ef-4937-89e2-acd16cdcc2a5-kube-api-access-m9tzr\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.209500 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.241053 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "515f4187-91ef-4937-89e2-acd16cdcc2a5" (UID: "515f4187-91ef-4937-89e2-acd16cdcc2a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.311282 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/515f4187-91ef-4937-89e2-acd16cdcc2a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.570501 4834 generic.go:334] "Generic (PLEG): container finished" podID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerID="f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41" exitCode=0 Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.570571 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerDied","Data":"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41"} Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.570632 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-st9rp" event={"ID":"515f4187-91ef-4937-89e2-acd16cdcc2a5","Type":"ContainerDied","Data":"878ec27eb96a033982433bb519972d4ad406c45f0b5f5b797c64a66069f256de"} Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.570658 4834 scope.go:117] "RemoveContainer" containerID="f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.570585 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-st9rp" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.615378 4834 scope.go:117] "RemoveContainer" containerID="09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.619072 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.627225 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-st9rp"] Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.637835 4834 scope.go:117] "RemoveContainer" containerID="f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.679416 4834 scope.go:117] "RemoveContainer" containerID="f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41" Jan 21 16:23:19 crc kubenswrapper[4834]: E0121 16:23:19.679915 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41\": container with ID starting with f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41 not found: ID does not exist" containerID="f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.679988 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41"} err="failed to get container status \"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41\": rpc error: code = NotFound desc = could not find container \"f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41\": container with ID starting with f81a59fa4e111d38d36d4fffe34ad5913aff6bd9e1bc5782388ba01c72d33b41 not found: ID does not exist" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.680016 4834 scope.go:117] "RemoveContainer" containerID="09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770" Jan 21 16:23:19 crc kubenswrapper[4834]: E0121 16:23:19.680328 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770\": container with ID starting with 09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770 not found: ID does not exist" containerID="09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.680348 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770"} err="failed to get container status \"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770\": rpc error: code = NotFound desc = could not find container \"09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770\": container with ID starting with 09069bfe9eb3e9afe3b5cadfbfbabe5e44a2e1e2090983a63a44d630a9ccf770 not found: ID does not exist" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.680361 4834 scope.go:117] "RemoveContainer" containerID="f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5" Jan 21 16:23:19 crc kubenswrapper[4834]: E0121 16:23:19.680777 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5\": container with ID starting with f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5 not found: ID does not exist" containerID="f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5" Jan 21 16:23:19 crc kubenswrapper[4834]: I0121 16:23:19.680795 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5"} err="failed to get container status \"f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5\": rpc error: code = NotFound desc = could not find container \"f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5\": container with ID starting with f808a903d5a609968ff45389bc749d599979444d015d7af2283f6c8ebef229b5 not found: ID does not exist" Jan 21 16:23:20 crc kubenswrapper[4834]: I0121 16:23:20.339060 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" path="/var/lib/kubelet/pods/515f4187-91ef-4937-89e2-acd16cdcc2a5/volumes" Jan 21 16:23:29 crc kubenswrapper[4834]: I0121 16:23:29.048428 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-f7d2-account-create-update-7fkw2"] Jan 21 16:23:29 crc kubenswrapper[4834]: I0121 16:23:29.059782 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-r9qcd"] Jan 21 16:23:29 crc kubenswrapper[4834]: I0121 16:23:29.070243 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-f7d2-account-create-update-7fkw2"] Jan 21 16:23:29 crc kubenswrapper[4834]: I0121 16:23:29.082408 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-r9qcd"] Jan 21 16:23:30 crc kubenswrapper[4834]: I0121 16:23:30.338105 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62501e92-7085-47b3-82b5-8df0e62730cb" path="/var/lib/kubelet/pods/62501e92-7085-47b3-82b5-8df0e62730cb/volumes" Jan 21 16:23:30 crc kubenswrapper[4834]: I0121 16:23:30.338919 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3" path="/var/lib/kubelet/pods/e7f3c58b-1b3f-43bc-9fc4-9f9f7f58bee3/volumes" Jan 21 16:23:42 crc kubenswrapper[4834]: I0121 16:23:42.044064 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gq75p"] Jan 21 16:23:42 crc kubenswrapper[4834]: I0121 16:23:42.053283 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gq75p"] Jan 21 16:23:42 crc kubenswrapper[4834]: I0121 16:23:42.337664 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f11ed7-1733-4047-83a5-51863b053069" path="/var/lib/kubelet/pods/c4f11ed7-1733-4047-83a5-51863b053069/volumes" Jan 21 16:24:13 crc kubenswrapper[4834]: I0121 16:24:13.834676 4834 scope.go:117] "RemoveContainer" containerID="84c3df88a7734c2e51a4eb9f6943111a9c22acc25833c352c000d578c29047f7" Jan 21 16:24:13 crc kubenswrapper[4834]: I0121 16:24:13.872646 4834 scope.go:117] "RemoveContainer" containerID="0ebd3426a8beeb259b0e96c2f25afd824d3cb13191a2a8e0e3934ea43414a6f5" Jan 21 16:24:13 crc kubenswrapper[4834]: I0121 16:24:13.938625 4834 scope.go:117] "RemoveContainer" containerID="0457c02e913e95705d27c31d13a4e38fcd78ca6cf7fc8bc8da0752143b6de5f3" Jan 21 16:24:47 crc kubenswrapper[4834]: I0121 16:24:47.114517 4834 patch_prober.go:28] interesting 
pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:24:47 crc kubenswrapper[4834]: I0121 16:24:47.115178 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:17 crc kubenswrapper[4834]: I0121 16:25:17.113714 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:17 crc kubenswrapper[4834]: I0121 16:25:17.114369 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:47 crc kubenswrapper[4834]: I0121 16:25:47.113732 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:25:47 crc kubenswrapper[4834]: I0121 16:25:47.114281 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:25:47 crc kubenswrapper[4834]: I0121 16:25:47.114325 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:25:47 crc kubenswrapper[4834]: I0121 16:25:47.115234 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:25:47 crc kubenswrapper[4834]: I0121 16:25:47.115312 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614" gracePeriod=600 Jan 21 16:25:48 crc kubenswrapper[4834]: I0121 16:25:48.101866 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614" exitCode=0 Jan 21 16:25:48 crc kubenswrapper[4834]: I0121 16:25:48.101940 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614"} Jan 21 16:25:48 crc kubenswrapper[4834]: I0121 16:25:48.102452 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"} Jan 21 16:25:48 crc kubenswrapper[4834]: I0121 16:25:48.102477 4834 scope.go:117] "RemoveContainer" containerID="46acbbd365d68d429f818ca85c66663660cf87599fc7bcff29ee24517c5524c7" Jan 21 16:25:52 crc kubenswrapper[4834]: I0121 16:25:52.038073 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2a0e-account-create-update-2pmzh"] Jan 21 16:25:52 crc kubenswrapper[4834]: I0121 16:25:52.051110 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2a0e-account-create-update-2pmzh"] Jan 21 16:25:52 crc kubenswrapper[4834]: I0121 16:25:52.340266 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acf71cc-5311-4be5-9499-3eb5c42ac831" path="/var/lib/kubelet/pods/1acf71cc-5311-4be5-9499-3eb5c42ac831/volumes" Jan 21 16:25:53 crc kubenswrapper[4834]: I0121 16:25:53.031081 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-xtvl5"] Jan 21 16:25:53 crc kubenswrapper[4834]: I0121 16:25:53.042148 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-xtvl5"] Jan 21 16:25:54 crc kubenswrapper[4834]: I0121 16:25:54.342088 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbab3f15-8f0e-4213-9c94-75068cd1502d" path="/var/lib/kubelet/pods/dbab3f15-8f0e-4213-9c94-75068cd1502d/volumes" Jan 21 16:26:04 crc kubenswrapper[4834]: I0121 16:26:04.035650 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-4s6l2"] Jan 21 16:26:04 crc kubenswrapper[4834]: I0121 16:26:04.045485 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-4s6l2"] Jan 21 16:26:04 crc kubenswrapper[4834]: I0121 16:26:04.364044 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a17e206-cd5a-43f0-b515-941e6664a63e" path="/var/lib/kubelet/pods/2a17e206-cd5a-43f0-b515-941e6664a63e/volumes" Jan 21 16:26:14 crc kubenswrapper[4834]: I0121 16:26:14.067526 4834 scope.go:117] "RemoveContainer" containerID="9b8c368dc48e3799aef0f3a6b4eeedf3a4ddf18603eec5afe92a0e49e46cb988" Jan 21 16:26:14 crc kubenswrapper[4834]: I0121 16:26:14.098116 4834 scope.go:117] "RemoveContainer" containerID="9ab268761ea405d6875465a646c34c8fb600432bd7c6d6971364760179237810" Jan 21 16:26:14 crc kubenswrapper[4834]: I0121 16:26:14.160081 4834 scope.go:117] "RemoveContainer" containerID="258b12302bf0faa45a35215d8f5a279e9ee3bb763f0bceed1f7b5cf83b673856" Jan 21 16:26:25 crc kubenswrapper[4834]: I0121 16:26:25.044859 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-42f3-account-create-update-tq7hk"] Jan 21 16:26:25 crc kubenswrapper[4834]: I0121 16:26:25.054624 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-2zsnf"] Jan 21 16:26:25 crc kubenswrapper[4834]: I0121 16:26:25.062839 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-2zsnf"] Jan 21 16:26:25 crc kubenswrapper[4834]: I0121 16:26:25.070658 4834 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-42f3-account-create-update-tq7hk"] Jan 21 16:26:26 crc kubenswrapper[4834]: I0121 16:26:26.340840 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9abaa2-24c9-448b-b82c-15cf02145f6c" path="/var/lib/kubelet/pods/9c9abaa2-24c9-448b-b82c-15cf02145f6c/volumes" Jan 21 16:26:26 crc kubenswrapper[4834]: I0121 16:26:26.342767 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7423589-b71a-4f9d-967f-ea1591657d19" path="/var/lib/kubelet/pods/a7423589-b71a-4f9d-967f-ea1591657d19/volumes" Jan 21 16:26:39 crc kubenswrapper[4834]: I0121 16:26:39.048615 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-hxrb7"] Jan 21 16:26:39 crc kubenswrapper[4834]: I0121 16:26:39.060874 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-hxrb7"] Jan 21 16:26:40 crc kubenswrapper[4834]: I0121 16:26:40.337598 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b1064e-c353-4f3a-bacb-739206fcde86" path="/var/lib/kubelet/pods/f1b1064e-c353-4f3a-bacb-739206fcde86/volumes" Jan 21 16:27:14 crc kubenswrapper[4834]: I0121 16:27:14.298535 4834 scope.go:117] "RemoveContainer" containerID="1c62c4a7b44307e473935e5922af7bcf55ecff5044585fba505b331a98116f17" Jan 21 16:27:14 crc kubenswrapper[4834]: I0121 16:27:14.330603 4834 scope.go:117] "RemoveContainer" containerID="d70c6dd8dabcd18df5ca365a15f151a9b1c6f5ff288e5d56877cb395583d1523" Jan 21 16:27:14 crc kubenswrapper[4834]: I0121 16:27:14.416814 4834 scope.go:117] "RemoveContainer" containerID="7af80ffc4ea331b8df5306b01017645f0cc1c9700125ef7af339afd0f59ba1dd" Jan 21 16:27:47 crc kubenswrapper[4834]: I0121 16:27:47.114403 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:27:47 crc kubenswrapper[4834]: I0121 16:27:47.114807 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:28:17 crc kubenswrapper[4834]: I0121 16:28:17.114209 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:28:17 crc kubenswrapper[4834]: I0121 16:28:17.114959 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.113618 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.114123 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.114180 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.115106 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.115173 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" gracePeriod=600
Jan 21 16:28:47 crc kubenswrapper[4834]: E0121 16:28:47.244740 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.924362 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" exitCode=0
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.924456 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"}
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.924750 4834 scope.go:117] "RemoveContainer" containerID="e3a358d3aca747c6a700cd9cc0c37bac4ec358a5e712dc6c571d06a7b5635614"
Jan 21 16:28:47 crc kubenswrapper[4834]: I0121 16:28:47.926143 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:28:47 crc kubenswrapper[4834]: E0121 16:28:47.926760 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:28:59 crc kubenswrapper[4834]: I0121 16:28:59.325263 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:28:59 crc kubenswrapper[4834]: E0121 16:28:59.328049 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:29:04 crc kubenswrapper[4834]: I0121 16:29:04.086358 4834 generic.go:334] "Generic (PLEG): container finished" podID="9d206bcf-791e-4e8e-bddf-faf2365abf8c" containerID="1d2d155e389cbc6166d8ced4c8b8cf81d7d1439d921efb8d71b3948fc1fdb597" exitCode=0
Jan 21 16:29:04 crc kubenswrapper[4834]: I0121 16:29:04.086419 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" event={"ID":"9d206bcf-791e-4e8e-bddf-faf2365abf8c","Type":"ContainerDied","Data":"1d2d155e389cbc6166d8ced4c8b8cf81d7d1439d921efb8d71b3948fc1fdb597"}
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.621474 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62"
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.707723 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1\") pod \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") "
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.708133 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph\") pod \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") "
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.708346 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmz8q\" (UniqueName: \"kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q\") pod \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") "
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.708457 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle\") pod \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") "
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.708832 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory\") pod \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\" (UID: \"9d206bcf-791e-4e8e-bddf-faf2365abf8c\") "
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.714142 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph" (OuterVolumeSpecName: "ceph") pod "9d206bcf-791e-4e8e-bddf-faf2365abf8c" (UID: "9d206bcf-791e-4e8e-bddf-faf2365abf8c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.714207 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "9d206bcf-791e-4e8e-bddf-faf2365abf8c" (UID: "9d206bcf-791e-4e8e-bddf-faf2365abf8c"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.714753 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q" (OuterVolumeSpecName: "kube-api-access-wmz8q") pod "9d206bcf-791e-4e8e-bddf-faf2365abf8c" (UID: "9d206bcf-791e-4e8e-bddf-faf2365abf8c"). InnerVolumeSpecName "kube-api-access-wmz8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.743430 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "9d206bcf-791e-4e8e-bddf-faf2365abf8c" (UID: "9d206bcf-791e-4e8e-bddf-faf2365abf8c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.745645 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory" (OuterVolumeSpecName: "inventory") pod "9d206bcf-791e-4e8e-bddf-faf2365abf8c" (UID: "9d206bcf-791e-4e8e-bddf-faf2365abf8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.811920 4834 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.811980 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.811990 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.812003 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d206bcf-791e-4e8e-bddf-faf2365abf8c-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:29:05 crc kubenswrapper[4834]: I0121 16:29:05.812011 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmz8q\" (UniqueName: \"kubernetes.io/projected/9d206bcf-791e-4e8e-bddf-faf2365abf8c-kube-api-access-wmz8q\") on node \"crc\" DevicePath \"\""
Jan 21 16:29:06 crc kubenswrapper[4834]: I0121 16:29:06.110499 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62" event={"ID":"9d206bcf-791e-4e8e-bddf-faf2365abf8c","Type":"ContainerDied","Data":"768277a2a0fed17914260b427c384023e522ee25bc9428b7399a13c1ebddbb0e"}
Jan 21 16:29:06 crc kubenswrapper[4834]: I0121 16:29:06.110560 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768277a2a0fed17914260b427c384023e522ee25bc9428b7399a13c1ebddbb0e"
Jan 21 16:29:06 crc kubenswrapper[4834]: I0121 16:29:06.110998 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62"
Jan 21 16:29:10 crc kubenswrapper[4834]: I0121 16:29:10.325370 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:29:10 crc kubenswrapper[4834]: E0121 16:29:10.326285 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.543900 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-sgmzf"]
Jan 21 16:29:16 crc kubenswrapper[4834]: E0121 16:29:16.545136 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="extract-content"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545151 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="extract-content"
Jan 21 16:29:16 crc kubenswrapper[4834]: E0121 16:29:16.545172 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="registry-server"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545178 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="registry-server"
Jan 21 16:29:16 crc kubenswrapper[4834]: E0121 16:29:16.545200 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d206bcf-791e-4e8e-bddf-faf2365abf8c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545208 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d206bcf-791e-4e8e-bddf-faf2365abf8c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 21 16:29:16 crc kubenswrapper[4834]: E0121 16:29:16.545228 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="extract-utilities"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545234 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="extract-utilities"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545436 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="515f4187-91ef-4937-89e2-acd16cdcc2a5" containerName="registry-server"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.545460 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d206bcf-791e-4e8e-bddf-faf2365abf8c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.546379 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.549763 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.549859 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.549972 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.550071 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.555208 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-sgmzf"]
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.654351 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.654423 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.654527 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lvf\" (UniqueName: \"kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.654645 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.655017 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.756595 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.756652 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.756710 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7lvf\" (UniqueName: \"kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.756796 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.756908 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.762745 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.763296 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.763402 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.773376 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.777208 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7lvf\" (UniqueName: \"kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf\") pod \"bootstrap-openstack-openstack-cell1-sgmzf\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:16 crc kubenswrapper[4834]: I0121 16:29:16.868121 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf"
Jan 21 16:29:17 crc kubenswrapper[4834]: I0121 16:29:17.450559 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-sgmzf"]
Jan 21 16:29:17 crc kubenswrapper[4834]: I0121 16:29:17.458450 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:29:18 crc kubenswrapper[4834]: I0121 16:29:18.220052 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" event={"ID":"3fc69335-1de8-4e41-a128-cc4f162719f1","Type":"ContainerStarted","Data":"bf4694c4e21103fff55f51bfb5ba4d5d5fcea12b0de99bccf256a80977978e6d"}
Jan 21 16:29:19 crc kubenswrapper[4834]: I0121 16:29:19.233285 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" event={"ID":"3fc69335-1de8-4e41-a128-cc4f162719f1","Type":"ContainerStarted","Data":"bded3b5d8797c25e5a45cc34ea30c3853bbaa4659e7415dd717eff5bdeb812a0"}
Jan 21 16:29:19 crc kubenswrapper[4834]: I0121 16:29:19.261632 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" podStartSLOduration=2.608366164 podStartE2EDuration="3.260909301s" podCreationTimestamp="2026-01-21 16:29:16 +0000 UTC" firstStartedPulling="2026-01-21 16:29:17.45824537 +0000 UTC m=+7103.432594405" lastFinishedPulling="2026-01-21 16:29:18.110788507 +0000 UTC m=+7104.085137542" observedRunningTime="2026-01-21 16:29:19.252897959 +0000 UTC m=+7105.227247004" watchObservedRunningTime="2026-01-21 16:29:19.260909301 +0000 UTC m=+7105.235258346"
Jan 21 16:29:21 crc kubenswrapper[4834]: I0121 16:29:21.324603 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:29:21 crc kubenswrapper[4834]: E0121 16:29:21.325586 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:29:34 crc kubenswrapper[4834]: I0121 16:29:34.333174 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:29:34 crc kubenswrapper[4834]: E0121 16:29:34.334008 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:29:49 crc kubenswrapper[4834]: I0121 16:29:49.324460 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:29:49 crc kubenswrapper[4834]: E0121 16:29:49.325283 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.154111 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"]
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.157555 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.160483 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.160614 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.175499 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"]
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.277378 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.277410 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8s6v\" (UniqueName: \"kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.277456 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.379296 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.379708 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.379740 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8s6v\" (UniqueName: \"kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.382787 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.386685 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.401027 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8s6v\" (UniqueName: \"kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v\") pod \"collect-profiles-29483550-z8nb6\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.487059 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:00 crc kubenswrapper[4834]: I0121 16:30:00.985826 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"]
Jan 21 16:30:01 crc kubenswrapper[4834]: I0121 16:30:01.706604 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6" event={"ID":"1f9c0c9c-7306-41ef-a294-eb99498e8aee","Type":"ContainerStarted","Data":"741bf92b32217ad9b7cec681775c01f03d7fa03c25a2b70a1a60ea91777a49e8"}
Jan 21 16:30:01 crc kubenswrapper[4834]: I0121 16:30:01.707156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6" event={"ID":"1f9c0c9c-7306-41ef-a294-eb99498e8aee","Type":"ContainerStarted","Data":"949a1ac149800433be43d01eefd170569b54ebff0a45fd765b8df405fb3dc65e"}
Jan 21 16:30:02 crc kubenswrapper[4834]: I0121 16:30:02.717539 4834 generic.go:334] "Generic (PLEG): container finished" podID="1f9c0c9c-7306-41ef-a294-eb99498e8aee" containerID="741bf92b32217ad9b7cec681775c01f03d7fa03c25a2b70a1a60ea91777a49e8" exitCode=0
Jan 21 16:30:02 crc kubenswrapper[4834]: I0121 16:30:02.717595 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6" event={"ID":"1f9c0c9c-7306-41ef-a294-eb99498e8aee","Type":"ContainerDied","Data":"741bf92b32217ad9b7cec681775c01f03d7fa03c25a2b70a1a60ea91777a49e8"}
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.078016 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.253915 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8s6v\" (UniqueName: \"kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v\") pod \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") "
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.254109 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume\") pod \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") "
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.254518 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume\") pod \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\" (UID: \"1f9c0c9c-7306-41ef-a294-eb99498e8aee\") "
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.254744 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume" (OuterVolumeSpecName: "config-volume") pod "1f9c0c9c-7306-41ef-a294-eb99498e8aee" (UID: "1f9c0c9c-7306-41ef-a294-eb99498e8aee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.256766 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f9c0c9c-7306-41ef-a294-eb99498e8aee-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.261659 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v" (OuterVolumeSpecName: "kube-api-access-j8s6v") pod "1f9c0c9c-7306-41ef-a294-eb99498e8aee" (UID: "1f9c0c9c-7306-41ef-a294-eb99498e8aee"). InnerVolumeSpecName "kube-api-access-j8s6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.263335 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1f9c0c9c-7306-41ef-a294-eb99498e8aee" (UID: "1f9c0c9c-7306-41ef-a294-eb99498e8aee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.325263 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:30:03 crc kubenswrapper[4834]: E0121 16:30:03.325657 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.359050 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f9c0c9c-7306-41ef-a294-eb99498e8aee-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.359107 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8s6v\" (UniqueName: \"kubernetes.io/projected/1f9c0c9c-7306-41ef-a294-eb99498e8aee-kube-api-access-j8s6v\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.734565 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6" event={"ID":"1f9c0c9c-7306-41ef-a294-eb99498e8aee","Type":"ContainerDied","Data":"949a1ac149800433be43d01eefd170569b54ebff0a45fd765b8df405fb3dc65e"}
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.734945 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949a1ac149800433be43d01eefd170569b54ebff0a45fd765b8df405fb3dc65e"
Jan 21 16:30:03 crc kubenswrapper[4834]: I0121 16:30:03.734628 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-z8nb6"
Jan 21 16:30:04 crc kubenswrapper[4834]: I0121 16:30:04.163976 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg"]
Jan 21 16:30:04 crc kubenswrapper[4834]: I0121 16:30:04.172460 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-jb4pg"]
Jan 21 16:30:04 crc kubenswrapper[4834]: I0121 16:30:04.345229 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e1fdf4-1cdc-40f8-9395-1e2416d06f5b" path="/var/lib/kubelet/pods/55e1fdf4-1cdc-40f8-9395-1e2416d06f5b/volumes"
Jan 21 16:30:14 crc kubenswrapper[4834]: I0121 16:30:14.333594 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:30:14 crc kubenswrapper[4834]: E0121 16:30:14.334485 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:30:14 crc kubenswrapper[4834]: I0121 16:30:14.602954 4834 scope.go:117] "RemoveContainer" containerID="f74f807651cbc5cc8be3d9b845d6a3b93b18bf4be5052b8d8a6ba12437a1a606"
Jan 21 16:30:28 crc kubenswrapper[4834]: I0121 16:30:28.325153 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093"
Jan 21 16:30:28 crc kubenswrapper[4834]: E0121 16:30:28.326400 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:30:29 crc kubenswrapper[4834]: I0121 16:30:29.999249 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"]
Jan 21 16:30:30 crc kubenswrapper[4834]: E0121 16:30:30.001014 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9c0c9c-7306-41ef-a294-eb99498e8aee" containerName="collect-profiles"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.001159 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9c0c9c-7306-41ef-a294-eb99498e8aee" containerName="collect-profiles"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.001738 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9c0c9c-7306-41ef-a294-eb99498e8aee" containerName="collect-profiles"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.006122 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.017682 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"]
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.083630 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.083700 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkr4\" (UniqueName: \"kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.083819 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.186708 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.187083 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkr4\" (UniqueName: \"kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.187285 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.187548 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.187803 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.223367 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkr4\" (UniqueName: \"kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4\") pod \"redhat-operators-psxwh\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:30 crc kubenswrapper[4834]: I0121 16:30:30.329177 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:31 crc kubenswrapper[4834]: I0121 16:30:31.094372 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"]
Jan 21 16:30:31 crc kubenswrapper[4834]: I0121 16:30:31.996972 4834 generic.go:334] "Generic (PLEG): container finished" podID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerID="d824c1e27e655776189b02fed95e301fda107e370cea153f99532a3bcc39fd4c" exitCode=0
Jan 21 16:30:31 crc kubenswrapper[4834]: I0121 16:30:31.997035 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerDied","Data":"d824c1e27e655776189b02fed95e301fda107e370cea153f99532a3bcc39fd4c"}
Jan 21 16:30:31 crc kubenswrapper[4834]: I0121 16:30:31.997406 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerStarted","Data":"1776670eb6d50ea606aedf466262e7f6c2409594f5bb1b0d5245330c610e6cc3"}
Jan 21 16:30:34 crc kubenswrapper[4834]: I0121 16:30:34.017657 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerStarted","Data":"dde68d08c76fe4be8702a635f67bc5ac90aff6a5be70550cfd29c3768f4bc4e3"}
Jan 21 16:30:38 crc kubenswrapper[4834]: I0121 16:30:38.059538 4834 generic.go:334] "Generic (PLEG): container finished" podID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerID="dde68d08c76fe4be8702a635f67bc5ac90aff6a5be70550cfd29c3768f4bc4e3" exitCode=0
Jan 21 16:30:38 crc kubenswrapper[4834]: I0121 16:30:38.059747 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerDied","Data":"dde68d08c76fe4be8702a635f67bc5ac90aff6a5be70550cfd29c3768f4bc4e3"}
Jan 21 16:30:39 crc kubenswrapper[4834]: I0121 16:30:39.071391 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerStarted","Data":"79ba202618bf28e59bd417398885cff1d593b44537a07314428ee19ff2118b5a"}
Jan 21 16:30:39 crc kubenswrapper[4834]: I0121 16:30:39.099004 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psxwh" podStartSLOduration=3.582425882 podStartE2EDuration="10.098983321s" podCreationTimestamp="2026-01-21 16:30:29 +0000 UTC" firstStartedPulling="2026-01-21 16:30:31.999010899 +0000 UTC m=+7177.973359944" lastFinishedPulling="2026-01-21 16:30:38.515568318 +0000 UTC m=+7184.489917383" observedRunningTime="2026-01-21 16:30:39.092353443 +0000 UTC m=+7185.066702498" watchObservedRunningTime="2026-01-21 16:30:39.098983321 +0000 UTC m=+7185.073332366"
Jan 21 16:30:40 crc kubenswrapper[4834]: I0121 16:30:40.358225 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psxwh"
Jan 21 16:30:40 crc kubenswrapper[4834]: I0121 16:30:40.358632 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psxwh" Jan 21 16:30:41 crc kubenswrapper[4834]: I0121 16:30:41.392379 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-psxwh" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="registry-server" probeResult="failure" output=< Jan 21 16:30:41 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 16:30:41 crc kubenswrapper[4834]: > Jan 21 16:30:42 crc kubenswrapper[4834]: I0121 16:30:42.324576 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:30:42 crc kubenswrapper[4834]: E0121 16:30:42.325269 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.347574 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.358736 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.365165 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.385430 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.385550 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvft\" (UniqueName: \"kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.385572 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.487812 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.488347 4834 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xqvft\" (UniqueName: \"kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.488499 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.488501 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.488833 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.523598 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvft\" (UniqueName: \"kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft\") pod \"community-operators-wst42\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:45 crc kubenswrapper[4834]: I0121 16:30:45.698281 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:46 crc kubenswrapper[4834]: I0121 16:30:46.304153 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:30:47 crc kubenswrapper[4834]: I0121 16:30:47.150652 4834 generic.go:334] "Generic (PLEG): container finished" podID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerID="14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46" exitCode=0 Jan 21 16:30:47 crc kubenswrapper[4834]: I0121 16:30:47.150713 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerDied","Data":"14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46"} Jan 21 16:30:47 crc kubenswrapper[4834]: I0121 16:30:47.151827 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerStarted","Data":"8bf763636a6c31e7efcb0c7c0751caef3051bfb0da0fc42cca82b87252451ba4"} Jan 21 16:30:48 crc kubenswrapper[4834]: I0121 16:30:48.163315 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerStarted","Data":"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645"} Jan 21 16:30:50 crc kubenswrapper[4834]: I0121 16:30:50.225415 4834 generic.go:334] "Generic (PLEG): container finished" podID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerID="bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645" exitCode=0 Jan 21 16:30:50 crc kubenswrapper[4834]: I0121 16:30:50.225441 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerDied","Data":"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645"} Jan 21 16:30:50 crc kubenswrapper[4834]: I0121 16:30:50.378473 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-psxwh" Jan 21 16:30:50 crc kubenswrapper[4834]: I0121 16:30:50.447086 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psxwh" Jan 21 16:30:51 crc kubenswrapper[4834]: I0121 16:30:51.722725 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"] Jan 21 16:30:52 crc kubenswrapper[4834]: I0121 16:30:52.250067 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerStarted","Data":"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38"} Jan 21 16:30:52 crc kubenswrapper[4834]: I0121 16:30:52.250263 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psxwh" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="registry-server" containerID="cri-o://79ba202618bf28e59bd417398885cff1d593b44537a07314428ee19ff2118b5a" gracePeriod=2 Jan 21 16:30:52 crc kubenswrapper[4834]: I0121 16:30:52.297169 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wst42" podStartSLOduration=2.552001337 podStartE2EDuration="7.297145297s" 
podCreationTimestamp="2026-01-21 16:30:45 +0000 UTC" firstStartedPulling="2026-01-21 16:30:47.156439065 +0000 UTC m=+7193.130788110" lastFinishedPulling="2026-01-21 16:30:51.901583025 +0000 UTC m=+7197.875932070" observedRunningTime="2026-01-21 16:30:52.276569483 +0000 UTC m=+7198.250918538" watchObservedRunningTime="2026-01-21 16:30:52.297145297 +0000 UTC m=+7198.271494342" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.271114 4834 generic.go:334] "Generic (PLEG): container finished" podID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerID="79ba202618bf28e59bd417398885cff1d593b44537a07314428ee19ff2118b5a" exitCode=0 Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.271376 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerDied","Data":"79ba202618bf28e59bd417398885cff1d593b44537a07314428ee19ff2118b5a"} Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.271745 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psxwh" event={"ID":"fdd8eaba-381c-4490-a333-10b92dc1d8ad","Type":"ContainerDied","Data":"1776670eb6d50ea606aedf466262e7f6c2409594f5bb1b0d5245330c610e6cc3"} Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.271772 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1776670eb6d50ea606aedf466262e7f6c2409594f5bb1b0d5245330c610e6cc3" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.350285 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psxwh" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.503119 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content\") pod \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.503349 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkr4\" (UniqueName: \"kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4\") pod \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.503493 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities\") pod \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\" (UID: \"fdd8eaba-381c-4490-a333-10b92dc1d8ad\") " Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.504524 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities" (OuterVolumeSpecName: "utilities") pod "fdd8eaba-381c-4490-a333-10b92dc1d8ad" (UID: "fdd8eaba-381c-4490-a333-10b92dc1d8ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.511222 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4" (OuterVolumeSpecName: "kube-api-access-gzkr4") pod "fdd8eaba-381c-4490-a333-10b92dc1d8ad" (UID: "fdd8eaba-381c-4490-a333-10b92dc1d8ad"). 
InnerVolumeSpecName "kube-api-access-gzkr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.607011 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzkr4\" (UniqueName: \"kubernetes.io/projected/fdd8eaba-381c-4490-a333-10b92dc1d8ad-kube-api-access-gzkr4\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.607046 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.625113 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd8eaba-381c-4490-a333-10b92dc1d8ad" (UID: "fdd8eaba-381c-4490-a333-10b92dc1d8ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:30:53 crc kubenswrapper[4834]: I0121 16:30:53.708808 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd8eaba-381c-4490-a333-10b92dc1d8ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:30:54 crc kubenswrapper[4834]: I0121 16:30:54.279823 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psxwh" Jan 21 16:30:54 crc kubenswrapper[4834]: I0121 16:30:54.321569 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"] Jan 21 16:30:54 crc kubenswrapper[4834]: I0121 16:30:54.338616 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psxwh"] Jan 21 16:30:55 crc kubenswrapper[4834]: I0121 16:30:55.698537 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:55 crc kubenswrapper[4834]: I0121 16:30:55.698971 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:55 crc kubenswrapper[4834]: I0121 16:30:55.750236 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:30:56 crc kubenswrapper[4834]: I0121 16:30:56.324900 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:30:56 crc kubenswrapper[4834]: E0121 16:30:56.325204 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:30:56 crc kubenswrapper[4834]: I0121 16:30:56.336521 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" path="/var/lib/kubelet/pods/fdd8eaba-381c-4490-a333-10b92dc1d8ad/volumes" Jan 21 16:31:05 crc kubenswrapper[4834]: I0121 16:31:05.757125 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-wst42" Jan 21 16:31:05 crc kubenswrapper[4834]: I0121 16:31:05.808542 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:31:06 crc kubenswrapper[4834]: I0121 16:31:06.409232 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wst42" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="registry-server" containerID="cri-o://686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38" gracePeriod=2 Jan 21 16:31:06 crc kubenswrapper[4834]: I0121 16:31:06.940315 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.118286 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvft\" (UniqueName: \"kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft\") pod \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.118324 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content\") pod \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.118417 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities\") pod \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\" (UID: \"f214c114-0f86-4fd6-8e9b-5a1aa89204f3\") " Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.119431 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities" (OuterVolumeSpecName: "utilities") pod "f214c114-0f86-4fd6-8e9b-5a1aa89204f3" (UID: "f214c114-0f86-4fd6-8e9b-5a1aa89204f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.124199 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft" (OuterVolumeSpecName: "kube-api-access-xqvft") pod "f214c114-0f86-4fd6-8e9b-5a1aa89204f3" (UID: "f214c114-0f86-4fd6-8e9b-5a1aa89204f3"). InnerVolumeSpecName "kube-api-access-xqvft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.199835 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f214c114-0f86-4fd6-8e9b-5a1aa89204f3" (UID: "f214c114-0f86-4fd6-8e9b-5a1aa89204f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.222566 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvft\" (UniqueName: \"kubernetes.io/projected/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-kube-api-access-xqvft\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.222600 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.222610 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f214c114-0f86-4fd6-8e9b-5a1aa89204f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.420705 4834 generic.go:334] "Generic (PLEG): container finished" podID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerID="686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38" exitCode=0 Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.420749 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerDied","Data":"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38"} Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.420779 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wst42" event={"ID":"f214c114-0f86-4fd6-8e9b-5a1aa89204f3","Type":"ContainerDied","Data":"8bf763636a6c31e7efcb0c7c0751caef3051bfb0da0fc42cca82b87252451ba4"} Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.420798 4834 scope.go:117] "RemoveContainer" containerID="686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.420806 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wst42" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.444025 4834 scope.go:117] "RemoveContainer" containerID="bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.457386 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.466229 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wst42"] Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.478752 4834 scope.go:117] "RemoveContainer" containerID="14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.510863 4834 scope.go:117] "RemoveContainer" containerID="686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38" Jan 21 16:31:07 crc kubenswrapper[4834]: E0121 16:31:07.511398 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38\": container with ID starting with 686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38 not found: ID does not exist" containerID="686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.511439 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38"} err="failed to get container status \"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38\": rpc error: code = NotFound desc = could not find container \"686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38\": container with ID starting with 686be2ac9ac7362dd9233e786db5983b32e4f4735ffe9bd06964b95fbd5dde38 not found: ID does not exist" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.511468 4834 scope.go:117] "RemoveContainer" containerID="bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645" Jan 21 16:31:07 crc kubenswrapper[4834]: E0121 16:31:07.511901 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645\": container with ID starting with bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645 not found: ID does not exist" containerID="bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.512010 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645"} err="failed to get container status \"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645\": rpc error: code = NotFound desc = could not find container \"bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645\": container with ID starting with bfba69c2c61d81ce1874decd8f4f65c9a6b23a3d967e821d9bd5d775e7d6f645 not found: ID does not exist" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.512065 4834 scope.go:117] "RemoveContainer" containerID="14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46" Jan 21 16:31:07 crc kubenswrapper[4834]: E0121 16:31:07.512515 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46\": container with ID starting with 14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46 not found: ID does not exist" containerID="14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46" Jan 21 16:31:07 crc kubenswrapper[4834]: I0121 16:31:07.512559 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46"} err="failed to get container status \"14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46\": rpc error: code = NotFound desc = could not find container \"14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46\": container with ID starting with 14fb6730454dfe657ef9927715001503e545552995c86eee8fb1b508a4b76f46 not found: ID does not exist" Jan 21 16:31:08 crc kubenswrapper[4834]: I0121 16:31:08.330067 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:31:08 crc kubenswrapper[4834]: E0121 16:31:08.330404 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:31:08 crc kubenswrapper[4834]: I0121 16:31:08.344501 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" path="/var/lib/kubelet/pods/f214c114-0f86-4fd6-8e9b-5a1aa89204f3/volumes" Jan 21 16:31:23 crc kubenswrapper[4834]: I0121 16:31:23.324761 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:31:23 crc kubenswrapper[4834]: E0121 16:31:23.325560 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:31:37 crc kubenswrapper[4834]: I0121 16:31:37.325784 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:31:37 crc kubenswrapper[4834]: E0121 16:31:37.326640 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:31:52 crc kubenswrapper[4834]: I0121 16:31:52.324975 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:31:52 crc kubenswrapper[4834]: E0121 16:31:52.326423 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:32:03 crc kubenswrapper[4834]: I0121 16:32:03.325706 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:32:03 crc kubenswrapper[4834]: E0121 16:32:03.326712 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.522094 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523174 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523195 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523221 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="extract-content" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523230 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="extract-content" Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523263 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="extract-content" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523270 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="extract-content" Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523302 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="extract-utilities" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523308 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="extract-utilities" Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523315 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="extract-utilities" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523322 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="extract-utilities" Jan 21 16:32:08 crc kubenswrapper[4834]: E0121 16:32:08.523336 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523342 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" 
containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523556 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="f214c114-0f86-4fd6-8e9b-5a1aa89204f3" containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.523588 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd8eaba-381c-4490-a333-10b92dc1d8ad" containerName="registry-server" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.525621 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.533417 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.667456 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8gz\" (UniqueName: \"kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.667835 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.667998 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.771119 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8gz\" (UniqueName: \"kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.771175 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.771226 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.772024 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content\") pod \"redhat-marketplace-2n4ph\" (UID: 
\"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.772772 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.795336 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8gz\" (UniqueName: \"kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz\") pod \"redhat-marketplace-2n4ph\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:08 crc kubenswrapper[4834]: I0121 16:32:08.846817 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:09 crc kubenswrapper[4834]: I0121 16:32:09.369840 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:10 crc kubenswrapper[4834]: I0121 16:32:10.024363 4834 generic.go:334] "Generic (PLEG): container finished" podID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerID="7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033" exitCode=0 Jan 21 16:32:10 crc kubenswrapper[4834]: I0121 16:32:10.024474 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerDied","Data":"7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033"} Jan 21 16:32:10 crc kubenswrapper[4834]: I0121 16:32:10.024691 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerStarted","Data":"2acfec17bfa2d806ef7f9943bf3f491e6c3b2b0b0a74d69922977588ece0d2e5"} Jan 21 16:32:11 crc kubenswrapper[4834]: I0121 16:32:11.038523 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerStarted","Data":"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f"} Jan 21 16:32:12 crc kubenswrapper[4834]: I0121 16:32:12.050953 4834 generic.go:334] "Generic (PLEG): container finished" podID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerID="b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f" exitCode=0 Jan 21 16:32:12 crc kubenswrapper[4834]: I0121 16:32:12.051083 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerDied","Data":"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f"} Jan 21 16:32:14 crc kubenswrapper[4834]: I0121 16:32:14.072058 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerStarted","Data":"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e"} Jan 21 16:32:14 crc kubenswrapper[4834]: I0121 16:32:14.100231 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2n4ph" 
podStartSLOduration=2.975973134 podStartE2EDuration="6.100207163s" podCreationTimestamp="2026-01-21 16:32:08 +0000 UTC" firstStartedPulling="2026-01-21 16:32:10.026183843 +0000 UTC m=+7276.000532888" lastFinishedPulling="2026-01-21 16:32:13.150417872 +0000 UTC m=+7279.124766917" observedRunningTime="2026-01-21 16:32:14.091248183 +0000 UTC m=+7280.065597228" watchObservedRunningTime="2026-01-21 16:32:14.100207163 +0000 UTC m=+7280.074556208" Jan 21 16:32:15 crc kubenswrapper[4834]: I0121 16:32:15.325180 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:32:15 crc kubenswrapper[4834]: E0121 16:32:15.325825 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:32:18 crc kubenswrapper[4834]: I0121 16:32:18.847192 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:18 crc kubenswrapper[4834]: I0121 16:32:18.847837 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:18 crc kubenswrapper[4834]: I0121 16:32:18.893127 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:19 crc kubenswrapper[4834]: I0121 16:32:19.189199 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:19 crc kubenswrapper[4834]: I0121 16:32:19.254438 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:21 crc kubenswrapper[4834]: I0121 16:32:21.153404 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2n4ph" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="registry-server" containerID="cri-o://faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e" gracePeriod=2 Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.048623 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.164757 4834 generic.go:334] "Generic (PLEG): container finished" podID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerID="faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e" exitCode=0 Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.164800 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerDied","Data":"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e"} Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.164832 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n4ph" event={"ID":"a2bd8817-f0fa-49e4-b22d-c5a4872be28f","Type":"ContainerDied","Data":"2acfec17bfa2d806ef7f9943bf3f491e6c3b2b0b0a74d69922977588ece0d2e5"} Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.164876 4834 scope.go:117] "RemoveContainer" containerID="faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.165915 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n4ph" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.176513 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content\") pod \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.176858 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh8gz\" (UniqueName: \"kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz\") pod \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.177055 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities\") pod \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\" (UID: \"a2bd8817-f0fa-49e4-b22d-c5a4872be28f\") " Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.178237 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities" (OuterVolumeSpecName: "utilities") pod "a2bd8817-f0fa-49e4-b22d-c5a4872be28f" (UID: "a2bd8817-f0fa-49e4-b22d-c5a4872be28f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.179131 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.182400 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz" (OuterVolumeSpecName: "kube-api-access-dh8gz") pod "a2bd8817-f0fa-49e4-b22d-c5a4872be28f" (UID: "a2bd8817-f0fa-49e4-b22d-c5a4872be28f"). InnerVolumeSpecName "kube-api-access-dh8gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.183637 4834 scope.go:117] "RemoveContainer" containerID="b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.202940 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2bd8817-f0fa-49e4-b22d-c5a4872be28f" (UID: "a2bd8817-f0fa-49e4-b22d-c5a4872be28f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.248036 4834 scope.go:117] "RemoveContainer" containerID="7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.281359 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.281630 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh8gz\" (UniqueName: \"kubernetes.io/projected/a2bd8817-f0fa-49e4-b22d-c5a4872be28f-kube-api-access-dh8gz\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.297121 4834 scope.go:117] "RemoveContainer" containerID="faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e" Jan 21 16:32:22 crc kubenswrapper[4834]: E0121 16:32:22.297651 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e\": container with ID starting with faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e not found: ID does not exist" containerID="faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.297786 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e"} err="failed to get container status \"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e\": rpc error: code = NotFound desc = could not find container \"faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e\": container with ID starting with faac1e4fff26c8d7f337d153bca2aabdc3a4cda3deddb831873bda5bec71b19e not found: ID does not exist" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.297888 4834 scope.go:117] "RemoveContainer" containerID="b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f" Jan 21 16:32:22 crc kubenswrapper[4834]: E0121 16:32:22.298656 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f\": container with ID starting with b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f not found: ID does not exist" containerID="b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.298694 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f"} err="failed to get container 
status \"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f\": rpc error: code = NotFound desc = could not find container \"b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f\": container with ID starting with b474513450eb66c6fbd66ef2d71702423efb8d3c897f2d34ac33a8e5454de84f not found: ID does not exist" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.298717 4834 scope.go:117] "RemoveContainer" containerID="7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033" Jan 21 16:32:22 crc kubenswrapper[4834]: E0121 16:32:22.299026 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033\": container with ID starting with 7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033 not found: ID does not exist" containerID="7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.299134 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033"} err="failed to get container status \"7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033\": rpc error: code = NotFound desc = could not find container \"7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033\": container with ID starting with 7dec3ecc72136d7b0671213f80445446932fdd1bfe327e3c8764184d90dbf033 not found: ID does not exist" Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.490341 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:22 crc kubenswrapper[4834]: I0121 16:32:22.498605 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n4ph"] Jan 21 16:32:24 crc kubenswrapper[4834]: I0121 16:32:24.353951 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" path="/var/lib/kubelet/pods/a2bd8817-f0fa-49e4-b22d-c5a4872be28f/volumes" Jan 21 16:32:26 crc kubenswrapper[4834]: I0121 16:32:26.325221 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:32:26 crc kubenswrapper[4834]: E0121 16:32:26.326285 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:32:38 crc kubenswrapper[4834]: I0121 16:32:38.334371 4834 generic.go:334] "Generic (PLEG): container finished" podID="3fc69335-1de8-4e41-a128-cc4f162719f1" containerID="bded3b5d8797c25e5a45cc34ea30c3853bbaa4659e7415dd717eff5bdeb812a0" exitCode=0 Jan 21 16:32:38 crc kubenswrapper[4834]: I0121 16:32:38.355658 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" event={"ID":"3fc69335-1de8-4e41-a128-cc4f162719f1","Type":"ContainerDied","Data":"bded3b5d8797c25e5a45cc34ea30c3853bbaa4659e7415dd717eff5bdeb812a0"} Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.768562 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.871942 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1\") pod \"3fc69335-1de8-4e41-a128-cc4f162719f1\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.872156 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph\") pod \"3fc69335-1de8-4e41-a128-cc4f162719f1\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.872331 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle\") pod \"3fc69335-1de8-4e41-a128-cc4f162719f1\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.872405 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory\") pod \"3fc69335-1de8-4e41-a128-cc4f162719f1\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.872480 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7lvf\" (UniqueName: \"kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf\") pod \"3fc69335-1de8-4e41-a128-cc4f162719f1\" (UID: \"3fc69335-1de8-4e41-a128-cc4f162719f1\") " Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.879765 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph" (OuterVolumeSpecName: "ceph") pod "3fc69335-1de8-4e41-a128-cc4f162719f1" (UID: "3fc69335-1de8-4e41-a128-cc4f162719f1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.880850 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3fc69335-1de8-4e41-a128-cc4f162719f1" (UID: "3fc69335-1de8-4e41-a128-cc4f162719f1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.881379 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf" (OuterVolumeSpecName: "kube-api-access-n7lvf") pod "3fc69335-1de8-4e41-a128-cc4f162719f1" (UID: "3fc69335-1de8-4e41-a128-cc4f162719f1"). InnerVolumeSpecName "kube-api-access-n7lvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.908217 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory" (OuterVolumeSpecName: "inventory") pod "3fc69335-1de8-4e41-a128-cc4f162719f1" (UID: "3fc69335-1de8-4e41-a128-cc4f162719f1"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.927839 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "3fc69335-1de8-4e41-a128-cc4f162719f1" (UID: "3fc69335-1de8-4e41-a128-cc4f162719f1"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.975345 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.975390 4834 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.975402 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.975413 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7lvf\" (UniqueName: \"kubernetes.io/projected/3fc69335-1de8-4e41-a128-cc4f162719f1-kube-api-access-n7lvf\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:39 crc kubenswrapper[4834]: I0121 16:32:39.975422 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/3fc69335-1de8-4e41-a128-cc4f162719f1-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.369437 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" event={"ID":"3fc69335-1de8-4e41-a128-cc4f162719f1","Type":"ContainerDied","Data":"bf4694c4e21103fff55f51bfb5ba4d5d5fcea12b0de99bccf256a80977978e6d"} Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.369703 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf4694c4e21103fff55f51bfb5ba4d5d5fcea12b0de99bccf256a80977978e6d" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.369495 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-sgmzf" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.474531 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-xgmnn"] Jan 21 16:32:40 crc kubenswrapper[4834]: E0121 16:32:40.475339 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="extract-content" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.475560 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="extract-content" Jan 21 16:32:40 crc kubenswrapper[4834]: E0121 16:32:40.475696 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="registry-server" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.475812 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="registry-server" Jan 21 16:32:40 crc kubenswrapper[4834]: E0121 16:32:40.475899 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="extract-utilities" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.476019 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="extract-utilities" Jan 21 16:32:40 crc kubenswrapper[4834]: E0121 16:32:40.476143 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc69335-1de8-4e41-a128-cc4f162719f1" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.476217 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc69335-1de8-4e41-a128-cc4f162719f1" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.476576 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2bd8817-f0fa-49e4-b22d-c5a4872be28f" containerName="registry-server" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.476666 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc69335-1de8-4e41-a128-cc4f162719f1" containerName="bootstrap-openstack-openstack-cell1" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.477606 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.480356 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.480704 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.480907 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.481204 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.488633 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-xgmnn"] Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.588792 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.588890 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqhn\" (UniqueName: \"kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.588948 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.589027 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.691385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.691570 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " 
pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.692555 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.692733 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkqhn\" (UniqueName: \"kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.696500 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.696512 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.697092 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.722652 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqhn\" (UniqueName: \"kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn\") pod \"download-cache-openstack-openstack-cell1-xgmnn\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:40 crc kubenswrapper[4834]: I0121 16:32:40.799652 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:32:41 crc kubenswrapper[4834]: I0121 16:32:41.324811 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:32:41 crc kubenswrapper[4834]: E0121 16:32:41.325766 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:32:41 crc kubenswrapper[4834]: I0121 16:32:41.392372 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-xgmnn"] Jan 21 16:32:42 crc kubenswrapper[4834]: I0121 16:32:42.394709 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" event={"ID":"2dfefb4c-526c-45fc-a398-41a79b28ac0b","Type":"ContainerStarted","Data":"be81c7ecd869193fbce8d244786ca193abd8a8f4342fcd905766ead178b1ec0d"} Jan 21 16:32:42 crc kubenswrapper[4834]: I0121 16:32:42.395083 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" event={"ID":"2dfefb4c-526c-45fc-a398-41a79b28ac0b","Type":"ContainerStarted","Data":"e0dadb62bf7417fd6b766448f40c453e0aeb7269bd42177c4c7ebd49b347b6ea"} Jan 21 16:32:42 crc kubenswrapper[4834]: I0121 16:32:42.420296 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" podStartSLOduration=1.971600571 podStartE2EDuration="2.420275125s" podCreationTimestamp="2026-01-21 16:32:40 +0000 UTC" firstStartedPulling="2026-01-21 16:32:41.395682712 +0000 UTC m=+7307.370031757" lastFinishedPulling="2026-01-21 16:32:41.844357256 +0000 UTC m=+7307.818706311" observedRunningTime="2026-01-21 16:32:42.41312383 +0000 UTC m=+7308.387472885" watchObservedRunningTime="2026-01-21 16:32:42.420275125 +0000 UTC m=+7308.394624170" Jan 21 16:32:53 crc kubenswrapper[4834]: I0121 16:32:53.325676 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:32:53 crc kubenswrapper[4834]: E0121 16:32:53.326724 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:33:04 crc kubenswrapper[4834]: I0121 16:33:04.333092 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:33:04 crc kubenswrapper[4834]: E0121 16:33:04.334158 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:33:15 crc kubenswrapper[4834]: I0121 16:33:15.325590 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:33:15 crc kubenswrapper[4834]: E0121 16:33:15.326442 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:33:27 crc kubenswrapper[4834]: I0121 16:33:27.324759 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:33:27 crc kubenswrapper[4834]: E0121 16:33:27.325695 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:33:41 crc kubenswrapper[4834]: I0121 16:33:41.325362 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:33:41 crc kubenswrapper[4834]: E0121 16:33:41.326180 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:33:56 crc kubenswrapper[4834]: I0121 16:33:56.324633 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:33:57 crc kubenswrapper[4834]: I0121 16:33:57.141202 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40"} Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.497768 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.503751 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.506872 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.660615 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwvd\" (UniqueName: \"kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.661048 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.661227 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.763143 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.763234 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.763336 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwvd\" (UniqueName: \"kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.764218 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.764225 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.783034 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9mwvd\" (UniqueName: \"kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd\") pod \"certified-operators-5f4cw\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:18 crc kubenswrapper[4834]: I0121 16:34:18.845710 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:19 crc kubenswrapper[4834]: I0121 16:34:19.368982 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:20 crc kubenswrapper[4834]: I0121 16:34:20.365647 4834 generic.go:334] "Generic (PLEG): container finished" podID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerID="91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1" exitCode=0 Jan 21 16:34:20 crc kubenswrapper[4834]: I0121 16:34:20.365760 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerDied","Data":"91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1"} Jan 21 16:34:20 crc kubenswrapper[4834]: I0121 16:34:20.366177 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerStarted","Data":"a34963dcce5b575050104b250584a6ccbc4798d66596cf59b0b20451bc644139"} Jan 21 16:34:20 crc kubenswrapper[4834]: I0121 16:34:20.368587 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:34:22 crc kubenswrapper[4834]: I0121 16:34:22.390672 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerStarted","Data":"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1"} Jan 21 16:34:23 crc kubenswrapper[4834]: I0121 16:34:23.403006 4834 generic.go:334] "Generic (PLEG): container finished" podID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerID="eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1" exitCode=0 Jan 21 16:34:23 crc kubenswrapper[4834]: I0121 16:34:23.403118 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerDied","Data":"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1"} Jan 21 16:34:24 crc kubenswrapper[4834]: I0121 16:34:24.417622 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerStarted","Data":"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789"} Jan 21 16:34:24 crc kubenswrapper[4834]: I0121 16:34:24.449718 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5f4cw" podStartSLOduration=2.798208584 podStartE2EDuration="6.449684772s" podCreationTimestamp="2026-01-21 16:34:18 +0000 UTC" firstStartedPulling="2026-01-21 16:34:20.368236699 +0000 UTC m=+7406.342585744" lastFinishedPulling="2026-01-21 16:34:24.019712877 +0000 UTC m=+7409.994061932" observedRunningTime="2026-01-21 16:34:24.43909792 +0000 UTC m=+7410.413447005" watchObservedRunningTime="2026-01-21 
16:34:24.449684772 +0000 UTC m=+7410.424033807" Jan 21 16:34:26 crc kubenswrapper[4834]: I0121 16:34:26.439147 4834 generic.go:334] "Generic (PLEG): container finished" podID="2dfefb4c-526c-45fc-a398-41a79b28ac0b" containerID="be81c7ecd869193fbce8d244786ca193abd8a8f4342fcd905766ead178b1ec0d" exitCode=0 Jan 21 16:34:26 crc kubenswrapper[4834]: I0121 16:34:26.439225 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" event={"ID":"2dfefb4c-526c-45fc-a398-41a79b28ac0b","Type":"ContainerDied","Data":"be81c7ecd869193fbce8d244786ca193abd8a8f4342fcd905766ead178b1ec0d"} Jan 21 16:34:27 crc kubenswrapper[4834]: I0121 16:34:27.924094 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.003943 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory\") pod \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.004089 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1\") pod \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.005583 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkqhn\" (UniqueName: \"kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn\") pod \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.005791 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph\") pod \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\" (UID: \"2dfefb4c-526c-45fc-a398-41a79b28ac0b\") " Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.009988 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph" (OuterVolumeSpecName: "ceph") pod "2dfefb4c-526c-45fc-a398-41a79b28ac0b" (UID: "2dfefb4c-526c-45fc-a398-41a79b28ac0b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.012755 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn" (OuterVolumeSpecName: "kube-api-access-kkqhn") pod "2dfefb4c-526c-45fc-a398-41a79b28ac0b" (UID: "2dfefb4c-526c-45fc-a398-41a79b28ac0b"). InnerVolumeSpecName "kube-api-access-kkqhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.036719 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory" (OuterVolumeSpecName: "inventory") pod "2dfefb4c-526c-45fc-a398-41a79b28ac0b" (UID: "2dfefb4c-526c-45fc-a398-41a79b28ac0b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.041653 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2dfefb4c-526c-45fc-a398-41a79b28ac0b" (UID: "2dfefb4c-526c-45fc-a398-41a79b28ac0b"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.110504 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.111078 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.111158 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkqhn\" (UniqueName: \"kubernetes.io/projected/2dfefb4c-526c-45fc-a398-41a79b28ac0b-kube-api-access-kkqhn\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.111233 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2dfefb4c-526c-45fc-a398-41a79b28ac0b-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.471574 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" event={"ID":"2dfefb4c-526c-45fc-a398-41a79b28ac0b","Type":"ContainerDied","Data":"e0dadb62bf7417fd6b766448f40c453e0aeb7269bd42177c4c7ebd49b347b6ea"} Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.471624 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0dadb62bf7417fd6b766448f40c453e0aeb7269bd42177c4c7ebd49b347b6ea" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.471639 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-xgmnn" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.553318 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jvtmv"] Jan 21 16:34:28 crc kubenswrapper[4834]: E0121 16:34:28.554250 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfefb4c-526c-45fc-a398-41a79b28ac0b" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.554283 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfefb4c-526c-45fc-a398-41a79b28ac0b" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.554557 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfefb4c-526c-45fc-a398-41a79b28ac0b" containerName="download-cache-openstack-openstack-cell1" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.555534 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.558235 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.558313 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.558661 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.559081 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.564542 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jvtmv"] Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.726070 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.726713 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.727216 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.727594 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzsx\" (UniqueName: \"kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.830120 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzsx\" (UniqueName: \"kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.830240 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " 
pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.830318 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.830372 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.835518 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.836775 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.846177 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.846244 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.848237 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.852881 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzsx\" (UniqueName: \"kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx\") pod \"configure-network-openstack-openstack-cell1-jvtmv\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.879135 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:34:28 crc kubenswrapper[4834]: I0121 16:34:28.907813 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:29 crc kubenswrapper[4834]: I0121 16:34:29.460771 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-jvtmv"] Jan 21 16:34:29 crc kubenswrapper[4834]: I0121 16:34:29.481490 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" event={"ID":"ff895217-df84-4fe6-a466-7f38685ed926","Type":"ContainerStarted","Data":"c67f60891b5c7dd4129c590079898be2e246ddca69630c49cfc0d526d20dec96"} Jan 21 16:34:29 crc kubenswrapper[4834]: I0121 16:34:29.527906 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:29 crc kubenswrapper[4834]: I0121 16:34:29.574852 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:30 crc kubenswrapper[4834]: I0121 16:34:30.492002 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" event={"ID":"ff895217-df84-4fe6-a466-7f38685ed926","Type":"ContainerStarted","Data":"be829d8240c2f493a0f8825fc7fb61c5e30714ebb083b327832a0bf24e0ac8c8"} Jan 21 16:34:30 crc kubenswrapper[4834]: I0121 16:34:30.510645 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" podStartSLOduration=1.7756598449999998 podStartE2EDuration="2.51062727s" podCreationTimestamp="2026-01-21 16:34:28 +0000 UTC" firstStartedPulling="2026-01-21 16:34:29.465016778 +0000 UTC m=+7415.439365823" lastFinishedPulling="2026-01-21 16:34:30.199984203 +0000 UTC m=+7416.174333248" observedRunningTime="2026-01-21 16:34:30.507527752 +0000 UTC m=+7416.481876797" watchObservedRunningTime="2026-01-21 16:34:30.51062727 +0000 UTC m=+7416.484976315" Jan 21 16:34:31 crc kubenswrapper[4834]: I0121 16:34:31.499606 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5f4cw" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="registry-server" containerID="cri-o://ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789" gracePeriod=2 Jan 21 16:34:31 crc kubenswrapper[4834]: I0121 16:34:31.984841 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.103560 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities\") pod \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.103749 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mwvd\" (UniqueName: \"kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd\") pod \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.103817 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content\") pod \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\" (UID: \"891ff9ba-080b-4ebd-8b54-c6098b28d5f9\") " Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.104541 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities" (OuterVolumeSpecName: "utilities") pod "891ff9ba-080b-4ebd-8b54-c6098b28d5f9" (UID: "891ff9ba-080b-4ebd-8b54-c6098b28d5f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.110597 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd" (OuterVolumeSpecName: "kube-api-access-9mwvd") pod "891ff9ba-080b-4ebd-8b54-c6098b28d5f9" (UID: "891ff9ba-080b-4ebd-8b54-c6098b28d5f9"). InnerVolumeSpecName "kube-api-access-9mwvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.149140 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "891ff9ba-080b-4ebd-8b54-c6098b28d5f9" (UID: "891ff9ba-080b-4ebd-8b54-c6098b28d5f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.206510 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.206562 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mwvd\" (UniqueName: \"kubernetes.io/projected/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-kube-api-access-9mwvd\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.206577 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891ff9ba-080b-4ebd-8b54-c6098b28d5f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.515081 4834 generic.go:334] "Generic (PLEG): container finished" podID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerID="ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789" exitCode=0 Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.515160 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5f4cw" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.515179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerDied","Data":"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789"} Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.515533 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5f4cw" event={"ID":"891ff9ba-080b-4ebd-8b54-c6098b28d5f9","Type":"ContainerDied","Data":"a34963dcce5b575050104b250584a6ccbc4798d66596cf59b0b20451bc644139"} Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.515560 4834 scope.go:117] "RemoveContainer" containerID="ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.546236 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.557226 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5f4cw"] Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.559583 4834 scope.go:117] "RemoveContainer" containerID="eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.589330 4834 scope.go:117] "RemoveContainer" containerID="91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.650315 4834 scope.go:117] "RemoveContainer" containerID="ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789" Jan 21 16:34:32 crc kubenswrapper[4834]: E0121 16:34:32.651203 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789\": container with ID starting with ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789 not found: ID does not exist" containerID="ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.651257 
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789"} err="failed to get container status \"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789\": rpc error: code = NotFound desc = could not find container \"ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789\": container with ID starting with ef912870cc6c99eb839161a3945ee7203eb51492854a0cc15f9355c6d51b5789 not found: ID does not exist" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.651290 4834 scope.go:117] "RemoveContainer" containerID="eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1" Jan 21 16:34:32 crc kubenswrapper[4834]: E0121 16:34:32.651747 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1\": container with ID starting with eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1 not found: ID does not exist" containerID="eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.651810 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1"} err="failed to get container status \"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1\": rpc error: code = NotFound desc = could not find container \"eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1\": container with ID starting with eec44bff8d27ac3c49154e8c85f8970c11600c3345c752a102bb39fd8bf847b1 not found: ID does not exist" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.651854 4834 scope.go:117] "RemoveContainer" containerID="91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1" Jan 21 16:34:32 crc kubenswrapper[4834]: E0121 16:34:32.652312 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1\": container with ID starting with 91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1 not found: ID does not exist" containerID="91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1" Jan 21 16:34:32 crc kubenswrapper[4834]: I0121 16:34:32.652365 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1"} err="failed to get container status \"91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1\": rpc error: code = NotFound desc = could not find container \"91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1\": container with ID starting with 91e0a2974b62aeb820e7a83425ad31d7f5c61657a2b57caa0dc4e1ca9f77b4c1 not found: ID does not exist" Jan 21 16:34:34 crc kubenswrapper[4834]: I0121 16:34:34.340838 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" path="/var/lib/kubelet/pods/891ff9ba-080b-4ebd-8b54-c6098b28d5f9/volumes" Jan 21 16:35:53 crc kubenswrapper[4834]: I0121 16:35:53.299974 4834 generic.go:334] "Generic (PLEG): container finished" podID="ff895217-df84-4fe6-a466-7f38685ed926" containerID="be829d8240c2f493a0f8825fc7fb61c5e30714ebb083b327832a0bf24e0ac8c8" exitCode=0 Jan 21 16:35:53 crc kubenswrapper[4834]: 
I0121 16:35:53.299993 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" event={"ID":"ff895217-df84-4fe6-a466-7f38685ed926","Type":"ContainerDied","Data":"be829d8240c2f493a0f8825fc7fb61c5e30714ebb083b327832a0bf24e0ac8c8"} Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.791491 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.898713 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph\") pod \"ff895217-df84-4fe6-a466-7f38685ed926\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.898905 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfzsx\" (UniqueName: \"kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx\") pod \"ff895217-df84-4fe6-a466-7f38685ed926\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.898989 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory\") pod \"ff895217-df84-4fe6-a466-7f38685ed926\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.899186 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1\") pod \"ff895217-df84-4fe6-a466-7f38685ed926\" (UID: \"ff895217-df84-4fe6-a466-7f38685ed926\") " Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.904125 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx" (OuterVolumeSpecName: "kube-api-access-qfzsx") pod "ff895217-df84-4fe6-a466-7f38685ed926" (UID: "ff895217-df84-4fe6-a466-7f38685ed926"). InnerVolumeSpecName "kube-api-access-qfzsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.904156 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph" (OuterVolumeSpecName: "ceph") pod "ff895217-df84-4fe6-a466-7f38685ed926" (UID: "ff895217-df84-4fe6-a466-7f38685ed926"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.927258 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory" (OuterVolumeSpecName: "inventory") pod "ff895217-df84-4fe6-a466-7f38685ed926" (UID: "ff895217-df84-4fe6-a466-7f38685ed926"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4834]: I0121 16:35:54.927279 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ff895217-df84-4fe6-a466-7f38685ed926" (UID: "ff895217-df84-4fe6-a466-7f38685ed926"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.001740 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfzsx\" (UniqueName: \"kubernetes.io/projected/ff895217-df84-4fe6-a466-7f38685ed926-kube-api-access-qfzsx\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.001782 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.001792 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.001801 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff895217-df84-4fe6-a466-7f38685ed926-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.337123 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" event={"ID":"ff895217-df84-4fe6-a466-7f38685ed926","Type":"ContainerDied","Data":"c67f60891b5c7dd4129c590079898be2e246ddca69630c49cfc0d526d20dec96"} Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.337546 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67f60891b5c7dd4129c590079898be2e246ddca69630c49cfc0d526d20dec96" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.337301 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-jvtmv" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.414422 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xn7mt"] Jan 21 16:35:55 crc kubenswrapper[4834]: E0121 16:35:55.414907 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="registry-server" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.414930 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="registry-server" Jan 21 16:35:55 crc kubenswrapper[4834]: E0121 16:35:55.414962 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="extract-utilities" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.414969 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="extract-utilities" Jan 21 16:35:55 crc kubenswrapper[4834]: E0121 16:35:55.414995 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="extract-content" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.415002 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="extract-content" Jan 21 16:35:55 crc kubenswrapper[4834]: E0121 16:35:55.415028 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff895217-df84-4fe6-a466-7f38685ed926" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.415035 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff895217-df84-4fe6-a466-7f38685ed926" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.415240 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="891ff9ba-080b-4ebd-8b54-c6098b28d5f9" containerName="registry-server" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.415259 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff895217-df84-4fe6-a466-7f38685ed926" containerName="configure-network-openstack-openstack-cell1" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.455159 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.460754 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.461305 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.461607 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.461770 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.549139 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.549416 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd2mc\" (UniqueName: \"kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.549498 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.549632 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.553194 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xn7mt"] Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.651195 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.651385 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd2mc\" (UniqueName: \"kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " 
pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.651449 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.651528 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.658495 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.661720 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.675431 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd2mc\" (UniqueName: \"kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.675786 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph\") pod \"validate-network-openstack-openstack-cell1-xn7mt\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:55 crc kubenswrapper[4834]: I0121 16:35:55.781018 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:35:56 crc kubenswrapper[4834]: I0121 16:35:56.346302 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-xn7mt"] Jan 21 16:35:57 crc kubenswrapper[4834]: I0121 16:35:57.354851 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" event={"ID":"0f96a41b-ab9c-43e0-be2e-e54502919aeb","Type":"ContainerStarted","Data":"33d1a98451bb83b9fce77cac2a09900ea58a0b434c9eee23785313dd8439345c"} Jan 21 16:35:59 crc kubenswrapper[4834]: I0121 16:35:59.390842 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" event={"ID":"0f96a41b-ab9c-43e0-be2e-e54502919aeb","Type":"ContainerStarted","Data":"7b8cbc0e1510df0c67107f090d6ab886f23e4b8f410d182e85d0052a0d9ca1f7"} Jan 21 16:35:59 crc kubenswrapper[4834]: I0121 16:35:59.427118 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" podStartSLOduration=1.910159363 podStartE2EDuration="4.427090287s" podCreationTimestamp="2026-01-21 16:35:55 +0000 UTC" firstStartedPulling="2026-01-21 16:35:56.357687121 +0000 UTC m=+7502.332036176" lastFinishedPulling="2026-01-21 16:35:58.874618055 +0000 UTC m=+7504.848967100" observedRunningTime="2026-01-21 16:35:59.420759248 +0000 UTC m=+7505.395108293" watchObservedRunningTime="2026-01-21 16:35:59.427090287 +0000 UTC m=+7505.401439342" Jan 21 16:36:04 crc kubenswrapper[4834]: I0121 16:36:04.433813 4834 generic.go:334] "Generic (PLEG): container finished" podID="0f96a41b-ab9c-43e0-be2e-e54502919aeb" containerID="7b8cbc0e1510df0c67107f090d6ab886f23e4b8f410d182e85d0052a0d9ca1f7" exitCode=0 Jan 21 16:36:04 crc kubenswrapper[4834]: I0121 16:36:04.433891 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" event={"ID":"0f96a41b-ab9c-43e0-be2e-e54502919aeb","Type":"ContainerDied","Data":"7b8cbc0e1510df0c67107f090d6ab886f23e4b8f410d182e85d0052a0d9ca1f7"} Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.899280 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.982529 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd2mc\" (UniqueName: \"kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc\") pod \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.982614 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory\") pod \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.982722 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1\") pod \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.982810 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph\") pod \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\" (UID: \"0f96a41b-ab9c-43e0-be2e-e54502919aeb\") " Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.988861 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph" (OuterVolumeSpecName: "ceph") pod "0f96a41b-ab9c-43e0-be2e-e54502919aeb" (UID: "0f96a41b-ab9c-43e0-be2e-e54502919aeb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:05 crc kubenswrapper[4834]: I0121 16:36:05.988886 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc" (OuterVolumeSpecName: "kube-api-access-nd2mc") pod "0f96a41b-ab9c-43e0-be2e-e54502919aeb" (UID: "0f96a41b-ab9c-43e0-be2e-e54502919aeb"). InnerVolumeSpecName "kube-api-access-nd2mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.021044 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory" (OuterVolumeSpecName: "inventory") pod "0f96a41b-ab9c-43e0-be2e-e54502919aeb" (UID: "0f96a41b-ab9c-43e0-be2e-e54502919aeb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.022078 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0f96a41b-ab9c-43e0-be2e-e54502919aeb" (UID: "0f96a41b-ab9c-43e0-be2e-e54502919aeb"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.085837 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd2mc\" (UniqueName: \"kubernetes.io/projected/0f96a41b-ab9c-43e0-be2e-e54502919aeb-kube-api-access-nd2mc\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.085890 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.085907 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.085949 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f96a41b-ab9c-43e0-be2e-e54502919aeb-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.455309 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" event={"ID":"0f96a41b-ab9c-43e0-be2e-e54502919aeb","Type":"ContainerDied","Data":"33d1a98451bb83b9fce77cac2a09900ea58a0b434c9eee23785313dd8439345c"} Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.455356 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d1a98451bb83b9fce77cac2a09900ea58a0b434c9eee23785313dd8439345c" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.455425 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-xn7mt" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.530726 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k655t"] Jan 21 16:36:06 crc kubenswrapper[4834]: E0121 16:36:06.531437 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f96a41b-ab9c-43e0-be2e-e54502919aeb" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.531459 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f96a41b-ab9c-43e0-be2e-e54502919aeb" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.531694 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f96a41b-ab9c-43e0-be2e-e54502919aeb" containerName="validate-network-openstack-openstack-cell1" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.532709 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.535335 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.537107 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.537528 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.537745 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.551806 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k655t"] Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.599494 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248fp\" (UniqueName: \"kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.599577 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.599762 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.599819 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.701588 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248fp\" (UniqueName: \"kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.701814 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc 
kubenswrapper[4834]: I0121 16:36:06.701917 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.701968 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.705884 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.707017 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.707194 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.726864 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248fp\" (UniqueName: \"kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp\") pod \"install-os-openstack-openstack-cell1-k655t\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:06 crc kubenswrapper[4834]: I0121 16:36:06.854908 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:07 crc kubenswrapper[4834]: I0121 16:36:07.375152 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-k655t"] Jan 21 16:36:07 crc kubenswrapper[4834]: I0121 16:36:07.465881 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k655t" event={"ID":"0cef0c11-c99e-4c8a-adc8-b9ee149a0660","Type":"ContainerStarted","Data":"a466d305623aea1e31e0e140b88e30dcd99bf691fe505015b0084d655887a265"} Jan 21 16:36:08 crc kubenswrapper[4834]: I0121 16:36:08.476107 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k655t" event={"ID":"0cef0c11-c99e-4c8a-adc8-b9ee149a0660","Type":"ContainerStarted","Data":"33511e2f2978ed0a4ffd45d529317f946a22fa75dcd06f3942120003a64ff0ed"} Jan 21 16:36:08 crc kubenswrapper[4834]: I0121 16:36:08.503288 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-k655t" podStartSLOduration=2.0976052689999998 podStartE2EDuration="2.503246009s" podCreationTimestamp="2026-01-21 16:36:06 +0000 UTC" firstStartedPulling="2026-01-21 16:36:07.381359732 +0000 UTC m=+7513.355708777" lastFinishedPulling="2026-01-21 16:36:07.787000482 +0000 UTC m=+7513.761349517" observedRunningTime="2026-01-21 16:36:08.492401768 +0000 UTC m=+7514.466750803" watchObservedRunningTime="2026-01-21 16:36:08.503246009 +0000 UTC m=+7514.477595074" Jan 21 16:36:17 crc kubenswrapper[4834]: I0121 16:36:17.113511 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:17 crc kubenswrapper[4834]: I0121 16:36:17.113954 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:47 crc kubenswrapper[4834]: I0121 16:36:47.114429 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:36:47 crc kubenswrapper[4834]: I0121 16:36:47.114957 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:36:56 crc kubenswrapper[4834]: I0121 16:36:56.948284 4834 generic.go:334] "Generic (PLEG): container finished" podID="0cef0c11-c99e-4c8a-adc8-b9ee149a0660" containerID="33511e2f2978ed0a4ffd45d529317f946a22fa75dcd06f3942120003a64ff0ed" exitCode=0 Jan 21 16:36:56 crc kubenswrapper[4834]: I0121 16:36:56.948361 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k655t" 
event={"ID":"0cef0c11-c99e-4c8a-adc8-b9ee149a0660","Type":"ContainerDied","Data":"33511e2f2978ed0a4ffd45d529317f946a22fa75dcd06f3942120003a64ff0ed"} Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.444321 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.554558 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1\") pod \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.554683 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory\") pod \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.555460 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph\") pod \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.555572 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-248fp\" (UniqueName: \"kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp\") pod \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\" (UID: \"0cef0c11-c99e-4c8a-adc8-b9ee149a0660\") " Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.560272 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp" (OuterVolumeSpecName: "kube-api-access-248fp") pod "0cef0c11-c99e-4c8a-adc8-b9ee149a0660" (UID: "0cef0c11-c99e-4c8a-adc8-b9ee149a0660"). InnerVolumeSpecName "kube-api-access-248fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.564896 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph" (OuterVolumeSpecName: "ceph") pod "0cef0c11-c99e-4c8a-adc8-b9ee149a0660" (UID: "0cef0c11-c99e-4c8a-adc8-b9ee149a0660"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.584355 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory" (OuterVolumeSpecName: "inventory") pod "0cef0c11-c99e-4c8a-adc8-b9ee149a0660" (UID: "0cef0c11-c99e-4c8a-adc8-b9ee149a0660"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.584816 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "0cef0c11-c99e-4c8a-adc8-b9ee149a0660" (UID: "0cef0c11-c99e-4c8a-adc8-b9ee149a0660"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.658408 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.658528 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.658538 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.658548 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-248fp\" (UniqueName: \"kubernetes.io/projected/0cef0c11-c99e-4c8a-adc8-b9ee149a0660-kube-api-access-248fp\") on node \"crc\" DevicePath \"\"" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.973160 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-k655t" event={"ID":"0cef0c11-c99e-4c8a-adc8-b9ee149a0660","Type":"ContainerDied","Data":"a466d305623aea1e31e0e140b88e30dcd99bf691fe505015b0084d655887a265"} Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.973205 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a466d305623aea1e31e0e140b88e30dcd99bf691fe505015b0084d655887a265" Jan 21 16:36:58 crc kubenswrapper[4834]: I0121 16:36:58.973240 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-k655t" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.058376 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-mcc9m"] Jan 21 16:36:59 crc kubenswrapper[4834]: E0121 16:36:59.058834 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cef0c11-c99e-4c8a-adc8-b9ee149a0660" containerName="install-os-openstack-openstack-cell1" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.058852 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cef0c11-c99e-4c8a-adc8-b9ee149a0660" containerName="install-os-openstack-openstack-cell1" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.059070 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cef0c11-c99e-4c8a-adc8-b9ee149a0660" containerName="install-os-openstack-openstack-cell1" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.059814 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.061971 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.062311 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.062323 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.077287 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.085967 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-mcc9m"] Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.168790 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.169215 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbxm\" (UniqueName: \"kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.169265 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.169455 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.271361 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbxm\" (UniqueName: \"kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.271440 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 
16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.271519 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.271621 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.276317 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.276378 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.276445 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.288558 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbxm\" (UniqueName: \"kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm\") pod \"configure-os-openstack-openstack-cell1-mcc9m\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.389754 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.905356 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-mcc9m"] Jan 21 16:36:59 crc kubenswrapper[4834]: I0121 16:36:59.985169 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" event={"ID":"ab3376df-6bc1-4e54-ac05-28d2db9d2486","Type":"ContainerStarted","Data":"5a0f8bd1c4fc991465cb94c6dac2d44f543b07e88c1b9af9bf72cd4ffa179bed"} Jan 21 16:37:02 crc kubenswrapper[4834]: I0121 16:37:02.009299 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" event={"ID":"ab3376df-6bc1-4e54-ac05-28d2db9d2486","Type":"ContainerStarted","Data":"a2e42773de9a65193f3291db3a5d7ded0fc18d7778d0bba037d3d9ddd9f1504d"} Jan 21 16:37:02 crc kubenswrapper[4834]: I0121 16:37:02.033046 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" podStartSLOduration=2.253522364 podStartE2EDuration="3.033019978s" podCreationTimestamp="2026-01-21 16:36:59 +0000 UTC" firstStartedPulling="2026-01-21 16:36:59.914468077 +0000 UTC m=+7565.888817132" lastFinishedPulling="2026-01-21 16:37:00.693965701 +0000 UTC m=+7566.668314746" observedRunningTime="2026-01-21 16:37:02.027537556 +0000 UTC m=+7568.001886621" watchObservedRunningTime="2026-01-21 16:37:02.033019978 +0000 UTC m=+7568.007369023" Jan 21 16:37:14 crc kubenswrapper[4834]: I0121 16:37:14.841999 4834 scope.go:117] "RemoveContainer" containerID="79ba202618bf28e59bd417398885cff1d593b44537a07314428ee19ff2118b5a" Jan 21 16:37:14 crc kubenswrapper[4834]: I0121 16:37:14.865858 4834 scope.go:117] "RemoveContainer" containerID="dde68d08c76fe4be8702a635f67bc5ac90aff6a5be70550cfd29c3768f4bc4e3" Jan 21 16:37:14 crc kubenswrapper[4834]: I0121 16:37:14.903447 4834 scope.go:117] "RemoveContainer" containerID="d824c1e27e655776189b02fed95e301fda107e370cea153f99532a3bcc39fd4c" Jan 21 16:37:17 crc kubenswrapper[4834]: I0121 16:37:17.113920 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:37:17 crc kubenswrapper[4834]: I0121 16:37:17.114525 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:37:17 crc kubenswrapper[4834]: I0121 16:37:17.114573 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:37:17 crc kubenswrapper[4834]: I0121 16:37:17.115493 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:37:17 crc kubenswrapper[4834]: I0121 
16:37:17.115549 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40" gracePeriod=600 Jan 21 16:37:18 crc kubenswrapper[4834]: I0121 16:37:18.206963 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40" exitCode=0 Jan 21 16:37:18 crc kubenswrapper[4834]: I0121 16:37:18.207130 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40"} Jan 21 16:37:18 crc kubenswrapper[4834]: I0121 16:37:18.207690 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"} Jan 21 16:37:18 crc kubenswrapper[4834]: I0121 16:37:18.207723 4834 scope.go:117] "RemoveContainer" containerID="954c89c825d293415d07e72f733c001278e0d58ef622d5a96c2570122e778093" Jan 21 16:37:45 crc kubenswrapper[4834]: I0121 16:37:45.509728 4834 generic.go:334] "Generic (PLEG): container finished" podID="ab3376df-6bc1-4e54-ac05-28d2db9d2486" containerID="a2e42773de9a65193f3291db3a5d7ded0fc18d7778d0bba037d3d9ddd9f1504d" exitCode=2 Jan 21 16:37:45 crc kubenswrapper[4834]: I0121 16:37:45.509830 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" event={"ID":"ab3376df-6bc1-4e54-ac05-28d2db9d2486","Type":"ContainerDied","Data":"a2e42773de9a65193f3291db3a5d7ded0fc18d7778d0bba037d3d9ddd9f1504d"} Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.117250 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.203388 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph\") pod \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.203479 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbxm\" (UniqueName: \"kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm\") pod \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.203533 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1\") pod \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.203608 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory\") pod \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\" (UID: \"ab3376df-6bc1-4e54-ac05-28d2db9d2486\") " Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.210364 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph" (OuterVolumeSpecName: "ceph") pod "ab3376df-6bc1-4e54-ac05-28d2db9d2486" (UID: "ab3376df-6bc1-4e54-ac05-28d2db9d2486"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.210598 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm" (OuterVolumeSpecName: "kube-api-access-9vbxm") pod "ab3376df-6bc1-4e54-ac05-28d2db9d2486" (UID: "ab3376df-6bc1-4e54-ac05-28d2db9d2486"). InnerVolumeSpecName "kube-api-access-9vbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.235753 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory" (OuterVolumeSpecName: "inventory") pod "ab3376df-6bc1-4e54-ac05-28d2db9d2486" (UID: "ab3376df-6bc1-4e54-ac05-28d2db9d2486"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.239518 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ab3376df-6bc1-4e54-ac05-28d2db9d2486" (UID: "ab3376df-6bc1-4e54-ac05-28d2db9d2486"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.306141 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ceph\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.306191 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbxm\" (UniqueName: \"kubernetes.io/projected/ab3376df-6bc1-4e54-ac05-28d2db9d2486-kube-api-access-9vbxm\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.306204 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.306212 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab3376df-6bc1-4e54-ac05-28d2db9d2486-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.541453 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" event={"ID":"ab3376df-6bc1-4e54-ac05-28d2db9d2486","Type":"ContainerDied","Data":"5a0f8bd1c4fc991465cb94c6dac2d44f543b07e88c1b9af9bf72cd4ffa179bed"} Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.542394 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0f8bd1c4fc991465cb94c6dac2d44f543b07e88c1b9af9bf72cd4ffa179bed" Jan 21 16:37:47 crc kubenswrapper[4834]: I0121 16:37:47.541665 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-mcc9m" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.034586 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fntcc"] Jan 21 16:37:54 crc kubenswrapper[4834]: E0121 16:37:54.035790 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3376df-6bc1-4e54-ac05-28d2db9d2486" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.035813 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3376df-6bc1-4e54-ac05-28d2db9d2486" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.036118 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3376df-6bc1-4e54-ac05-28d2db9d2486" containerName="configure-os-openstack-openstack-cell1" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.037096 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.039398 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.040132 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.040414 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.042919 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.051782 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fntcc"] Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.080910 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.081077 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnrm\" (UniqueName: \"kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.081113 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.081155 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.183324 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.183499 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnrm\" (UniqueName: \"kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc" Jan 21 
16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.183536 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.183577 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.192817 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.192979 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.193175 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.203306 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnrm\" (UniqueName: \"kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm\") pod \"configure-os-openstack-openstack-cell1-fntcc\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") " pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.371017 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.379903 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:37:54 crc kubenswrapper[4834]: I0121 16:37:54.935781 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fntcc"]
Jan 21 16:37:55 crc kubenswrapper[4834]: I0121 16:37:55.426886 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:37:55 crc kubenswrapper[4834]: I0121 16:37:55.614016 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" event={"ID":"c3577c85-fa53-4981-a641-9ca95c0fa877","Type":"ContainerStarted","Data":"4539f6391e0a4126244ca49faa83dc1bbc77562bc475b7043b9f57e002ff708c"}
Jan 21 16:37:56 crc kubenswrapper[4834]: I0121 16:37:56.627178 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" event={"ID":"c3577c85-fa53-4981-a641-9ca95c0fa877","Type":"ContainerStarted","Data":"c15dd103a313dcdc91aefb297e12d8612caf8e521881a86ef88ffb42f9e469a4"}
Jan 21 16:37:56 crc kubenswrapper[4834]: I0121 16:37:56.654618 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" podStartSLOduration=2.176076263 podStartE2EDuration="2.654588693s" podCreationTimestamp="2026-01-21 16:37:54 +0000 UTC" firstStartedPulling="2026-01-21 16:37:54.945045247 +0000 UTC m=+7620.919394302" lastFinishedPulling="2026-01-21 16:37:55.423557687 +0000 UTC m=+7621.397906732" observedRunningTime="2026-01-21 16:37:56.644030721 +0000 UTC m=+7622.618379776" watchObservedRunningTime="2026-01-21 16:37:56.654588693 +0000 UTC m=+7622.628937748"
Jan 21 16:38:36 crc kubenswrapper[4834]: I0121 16:38:36.064087 4834 generic.go:334] "Generic (PLEG): container finished" podID="c3577c85-fa53-4981-a641-9ca95c0fa877" containerID="c15dd103a313dcdc91aefb297e12d8612caf8e521881a86ef88ffb42f9e469a4" exitCode=2
Jan 21 16:38:36 crc kubenswrapper[4834]: I0121 16:38:36.064188 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" event={"ID":"c3577c85-fa53-4981-a641-9ca95c0fa877","Type":"ContainerDied","Data":"c15dd103a313dcdc91aefb297e12d8612caf8e521881a86ef88ffb42f9e469a4"}
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.618161 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.803897 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1\") pod \"c3577c85-fa53-4981-a641-9ca95c0fa877\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") "
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.803977 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph\") pod \"c3577c85-fa53-4981-a641-9ca95c0fa877\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") "
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.804036 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmnrm\" (UniqueName: \"kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm\") pod \"c3577c85-fa53-4981-a641-9ca95c0fa877\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") "
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.804254 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory\") pod \"c3577c85-fa53-4981-a641-9ca95c0fa877\" (UID: \"c3577c85-fa53-4981-a641-9ca95c0fa877\") "
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.810732 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph" (OuterVolumeSpecName: "ceph") pod "c3577c85-fa53-4981-a641-9ca95c0fa877" (UID: "c3577c85-fa53-4981-a641-9ca95c0fa877"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.816993 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm" (OuterVolumeSpecName: "kube-api-access-lmnrm") pod "c3577c85-fa53-4981-a641-9ca95c0fa877" (UID: "c3577c85-fa53-4981-a641-9ca95c0fa877"). InnerVolumeSpecName "kube-api-access-lmnrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.839044 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "c3577c85-fa53-4981-a641-9ca95c0fa877" (UID: "c3577c85-fa53-4981-a641-9ca95c0fa877"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.841246 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory" (OuterVolumeSpecName: "inventory") pod "c3577c85-fa53-4981-a641-9ca95c0fa877" (UID: "c3577c85-fa53-4981-a641-9ca95c0fa877"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.908515 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.908578 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.908594 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3577c85-fa53-4981-a641-9ca95c0fa877-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:37 crc kubenswrapper[4834]: I0121 16:38:37.908611 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmnrm\" (UniqueName: \"kubernetes.io/projected/c3577c85-fa53-4981-a641-9ca95c0fa877-kube-api-access-lmnrm\") on node \"crc\" DevicePath \"\""
Jan 21 16:38:38 crc kubenswrapper[4834]: I0121 16:38:38.085200 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fntcc" event={"ID":"c3577c85-fa53-4981-a641-9ca95c0fa877","Type":"ContainerDied","Data":"4539f6391e0a4126244ca49faa83dc1bbc77562bc475b7043b9f57e002ff708c"}
Jan 21 16:38:38 crc kubenswrapper[4834]: I0121 16:38:38.085246 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fntcc"
Jan 21 16:38:38 crc kubenswrapper[4834]: I0121 16:38:38.085253 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4539f6391e0a4126244ca49faa83dc1bbc77562bc475b7043b9f57e002ff708c"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.035725 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-s6bqt"]
Jan 21 16:38:55 crc kubenswrapper[4834]: E0121 16:38:55.037071 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3577c85-fa53-4981-a641-9ca95c0fa877" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.037111 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3577c85-fa53-4981-a641-9ca95c0fa877" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.038063 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3577c85-fa53-4981-a641-9ca95c0fa877" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.039394 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.046401 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.046847 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-s6bqt"]
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.046848 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.047014 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.047538 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.110743 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.111085 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.111141 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v56fn\" (UniqueName: \"kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.111176 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.213212 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.213271 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.213315 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v56fn\" (UniqueName: \"kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.213353 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.219571 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.222718 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.228782 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.244533 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v56fn\" (UniqueName: \"kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn\") pod \"configure-os-openstack-openstack-cell1-s6bqt\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") " pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.375111 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:38:55 crc kubenswrapper[4834]: I0121 16:38:55.980857 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-s6bqt"]
Jan 21 16:38:56 crc kubenswrapper[4834]: I0121 16:38:56.284469 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt" event={"ID":"83e06af3-bd7e-412e-b471-e60df89514df","Type":"ContainerStarted","Data":"847ea221ee8005543c0d60c8cdfa895f54bc9f6f97df7f682fd3d5752a85d830"}
Jan 21 16:38:57 crc kubenswrapper[4834]: I0121 16:38:57.296716 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt" event={"ID":"83e06af3-bd7e-412e-b471-e60df89514df","Type":"ContainerStarted","Data":"85fdd6063aa451be0760da7bb58ffb4363a91cefb78b29163648ca4279159dae"}
Jan 21 16:38:57 crc kubenswrapper[4834]: I0121 16:38:57.326593 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt" podStartSLOduration=1.85693094 podStartE2EDuration="2.32655012s" podCreationTimestamp="2026-01-21 16:38:55 +0000 UTC" firstStartedPulling="2026-01-21 16:38:55.987885144 +0000 UTC m=+7681.962234189" lastFinishedPulling="2026-01-21 16:38:56.457504324 +0000 UTC m=+7682.431853369" observedRunningTime="2026-01-21 16:38:57.313565923 +0000 UTC m=+7683.287914978" watchObservedRunningTime="2026-01-21 16:38:57.32655012 +0000 UTC m=+7683.300899165"
Jan 21 16:39:17 crc kubenswrapper[4834]: I0121 16:39:17.113936 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:39:17 crc kubenswrapper[4834]: I0121 16:39:17.115556 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:39:37 crc kubenswrapper[4834]: I0121 16:39:37.688079 4834 generic.go:334] "Generic (PLEG): container finished" podID="83e06af3-bd7e-412e-b471-e60df89514df" containerID="85fdd6063aa451be0760da7bb58ffb4363a91cefb78b29163648ca4279159dae" exitCode=2
Jan 21 16:39:37 crc kubenswrapper[4834]: I0121 16:39:37.688224 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt" event={"ID":"83e06af3-bd7e-412e-b471-e60df89514df","Type":"ContainerDied","Data":"85fdd6063aa451be0760da7bb58ffb4363a91cefb78b29163648ca4279159dae"}
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.156495 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.357044 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1\") pod \"83e06af3-bd7e-412e-b471-e60df89514df\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") "
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.357744 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph\") pod \"83e06af3-bd7e-412e-b471-e60df89514df\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") "
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.357881 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory\") pod \"83e06af3-bd7e-412e-b471-e60df89514df\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") "
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.357956 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v56fn\" (UniqueName: \"kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn\") pod \"83e06af3-bd7e-412e-b471-e60df89514df\" (UID: \"83e06af3-bd7e-412e-b471-e60df89514df\") "
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.363010 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn" (OuterVolumeSpecName: "kube-api-access-v56fn") pod "83e06af3-bd7e-412e-b471-e60df89514df" (UID: "83e06af3-bd7e-412e-b471-e60df89514df"). InnerVolumeSpecName "kube-api-access-v56fn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.364118 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph" (OuterVolumeSpecName: "ceph") pod "83e06af3-bd7e-412e-b471-e60df89514df" (UID: "83e06af3-bd7e-412e-b471-e60df89514df"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.394058 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory" (OuterVolumeSpecName: "inventory") pod "83e06af3-bd7e-412e-b471-e60df89514df" (UID: "83e06af3-bd7e-412e-b471-e60df89514df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.395040 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "83e06af3-bd7e-412e-b471-e60df89514df" (UID: "83e06af3-bd7e-412e-b471-e60df89514df"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.465355 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.465667 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.465777 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v56fn\" (UniqueName: \"kubernetes.io/projected/83e06af3-bd7e-412e-b471-e60df89514df-kube-api-access-v56fn\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.465898 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/83e06af3-bd7e-412e-b471-e60df89514df-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.706331 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt" event={"ID":"83e06af3-bd7e-412e-b471-e60df89514df","Type":"ContainerDied","Data":"847ea221ee8005543c0d60c8cdfa895f54bc9f6f97df7f682fd3d5752a85d830"}
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.706389 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847ea221ee8005543c0d60c8cdfa895f54bc9f6f97df7f682fd3d5752a85d830"
Jan 21 16:39:39 crc kubenswrapper[4834]: I0121 16:39:39.706389 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-s6bqt"
Jan 21 16:39:47 crc kubenswrapper[4834]: I0121 16:39:47.114229 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:39:47 crc kubenswrapper[4834]: I0121 16:39:47.114947 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.034232 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fxvzg"]
Jan 21 16:40:17 crc kubenswrapper[4834]: E0121 16:40:17.035384 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e06af3-bd7e-412e-b471-e60df89514df" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.035403 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e06af3-bd7e-412e-b471-e60df89514df" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.035687 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e06af3-bd7e-412e-b471-e60df89514df" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.037017 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.039292 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.039357 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-fm2xq"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.039489 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.040910 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.045668 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fxvzg"]
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.111954 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.112043 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.112079 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5tl\" (UniqueName: \"kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.112453 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.113383 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.113434 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.113477 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.114152 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.114217 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" gracePeriod=600
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.215030 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.215291 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.215399 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.215428 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl5tl\" (UniqueName: \"kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.222666 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.222836 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.223045 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.235266 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl5tl\" (UniqueName: \"kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl\") pod \"configure-os-openstack-openstack-cell1-fxvzg\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") " pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: E0121 16:40:17.237510 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.362292 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.912308 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-fxvzg"]
Jan 21 16:40:17 crc kubenswrapper[4834]: W0121 16:40:17.912689 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa34a899_92ec_4e6e_98e2_090a64b4a8c9.slice/crio-da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b WatchSource:0}: Error finding container da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b: Status 404 returned error can't find the container with id da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b
Jan 21 16:40:17 crc kubenswrapper[4834]: I0121 16:40:17.915615 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:40:18 crc kubenswrapper[4834]: I0121 16:40:18.105963 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg" event={"ID":"aa34a899-92ec-4e6e-98e2-090a64b4a8c9","Type":"ContainerStarted","Data":"da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b"}
Jan 21 16:40:18 crc kubenswrapper[4834]: I0121 16:40:18.108250 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" exitCode=0
Jan 21 16:40:18 crc kubenswrapper[4834]: I0121 16:40:18.108287 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"}
Jan 21 16:40:18 crc kubenswrapper[4834]: I0121 16:40:18.108317 4834 scope.go:117] "RemoveContainer" containerID="b51f50c4e7c6b4b1c2488cbfa3e9765f471f15d32a3eb55c08dff4d01edf5b40"
Jan 21 16:40:18 crc kubenswrapper[4834]: I0121 16:40:18.109175 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:40:18 crc kubenswrapper[4834]: E0121 16:40:18.109579 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:40:19 crc kubenswrapper[4834]: I0121 16:40:19.124423 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg" event={"ID":"aa34a899-92ec-4e6e-98e2-090a64b4a8c9","Type":"ContainerStarted","Data":"ced29c002522ff840feb7fd6817248358990885e71d707a669202467185a5e93"}
Jan 21 16:40:19 crc kubenswrapper[4834]: I0121 16:40:19.144876 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg" podStartSLOduration=1.7471729520000001 podStartE2EDuration="2.144850453s" podCreationTimestamp="2026-01-21 16:40:17 +0000 UTC" firstStartedPulling="2026-01-21 16:40:17.915360495 +0000 UTC m=+7763.889709540" lastFinishedPulling="2026-01-21 16:40:18.313037996 +0000 UTC m=+7764.287387041" observedRunningTime="2026-01-21 16:40:19.143898693 +0000 UTC m=+7765.118247768" watchObservedRunningTime="2026-01-21 16:40:19.144850453 +0000 UTC m=+7765.119199508"
Jan 21 16:40:29 crc kubenswrapper[4834]: I0121 16:40:29.325068 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:40:29 crc kubenswrapper[4834]: E0121 16:40:29.326037 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:40:43 crc kubenswrapper[4834]: I0121 16:40:43.325821 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:40:43 crc kubenswrapper[4834]: E0121 16:40:43.326733 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.780830 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.792546 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.818626 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.908336 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.908387 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:46 crc kubenswrapper[4834]: I0121 16:40:46.908412 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9dp\" (UniqueName: \"kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.010520 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.010612 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.010646 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9dp\" (UniqueName: \"kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.011330 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.011391 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.031308 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9dp\" (UniqueName: \"kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp\") pod \"community-operators-rmlpg\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") " pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.131311 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:47 crc kubenswrapper[4834]: I0121 16:40:47.774290 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:40:48 crc kubenswrapper[4834]: I0121 16:40:48.416753 4834 generic.go:334] "Generic (PLEG): container finished" podID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerID="56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2" exitCode=0
Jan 21 16:40:48 crc kubenswrapper[4834]: I0121 16:40:48.416814 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerDied","Data":"56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2"}
Jan 21 16:40:48 crc kubenswrapper[4834]: I0121 16:40:48.417547 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerStarted","Data":"f1be702890df1248c24cd2c33d97c33d23ccd4b7478b4ebc6f7e5ebfd772b6a2"}
Jan 21 16:40:49 crc kubenswrapper[4834]: I0121 16:40:49.440490 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerStarted","Data":"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"}
Jan 21 16:40:50 crc kubenswrapper[4834]: I0121 16:40:50.454966 4834 generic.go:334] "Generic (PLEG): container finished" podID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerID="0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104" exitCode=0
Jan 21 16:40:50 crc kubenswrapper[4834]: I0121 16:40:50.455325 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerDied","Data":"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"}
Jan 21 16:40:51 crc kubenswrapper[4834]: I0121 16:40:51.466751 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerStarted","Data":"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"}
Jan 21 16:40:51 crc kubenswrapper[4834]: I0121 16:40:51.493541 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmlpg" podStartSLOduration=2.823144055 podStartE2EDuration="5.493515359s" podCreationTimestamp="2026-01-21 16:40:46 +0000 UTC" firstStartedPulling="2026-01-21 16:40:48.418669311 +0000 UTC m=+7794.393018346" lastFinishedPulling="2026-01-21 16:40:51.089040605 +0000 UTC m=+7797.063389650" observedRunningTime="2026-01-21 16:40:51.485974862 +0000 UTC m=+7797.460323927" watchObservedRunningTime="2026-01-21 16:40:51.493515359 +0000 UTC m=+7797.467864404"
Jan 21 16:40:54 crc kubenswrapper[4834]: I0121 16:40:54.333160 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:40:54 crc kubenswrapper[4834]: E0121 16:40:54.334839 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.131979 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.132514 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.179444 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.525084 4834 generic.go:334] "Generic (PLEG): container finished" podID="aa34a899-92ec-4e6e-98e2-090a64b4a8c9" containerID="ced29c002522ff840feb7fd6817248358990885e71d707a669202467185a5e93" exitCode=2
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.526379 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg" event={"ID":"aa34a899-92ec-4e6e-98e2-090a64b4a8c9","Type":"ContainerDied","Data":"ced29c002522ff840feb7fd6817248358990885e71d707a669202467185a5e93"}
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.617466 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:40:57 crc kubenswrapper[4834]: I0121 16:40:57.688980 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.000759 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.079393 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory\") pod \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") "
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.079525 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1\") pod \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") "
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.079760 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl5tl\" (UniqueName: \"kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl\") pod \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") "
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.079818 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph\") pod \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\" (UID: \"aa34a899-92ec-4e6e-98e2-090a64b4a8c9\") "
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.085336 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl" (OuterVolumeSpecName: "kube-api-access-hl5tl") pod "aa34a899-92ec-4e6e-98e2-090a64b4a8c9" (UID: "aa34a899-92ec-4e6e-98e2-090a64b4a8c9"). InnerVolumeSpecName "kube-api-access-hl5tl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.094247 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph" (OuterVolumeSpecName: "ceph") pod "aa34a899-92ec-4e6e-98e2-090a64b4a8c9" (UID: "aa34a899-92ec-4e6e-98e2-090a64b4a8c9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.110445 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "aa34a899-92ec-4e6e-98e2-090a64b4a8c9" (UID: "aa34a899-92ec-4e6e-98e2-090a64b4a8c9"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.119768 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory" (OuterVolumeSpecName: "inventory") pod "aa34a899-92ec-4e6e-98e2-090a64b4a8c9" (UID: "aa34a899-92ec-4e6e-98e2-090a64b4a8c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.181962 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl5tl\" (UniqueName: \"kubernetes.io/projected/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-kube-api-access-hl5tl\") on node \"crc\" DevicePath \"\""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.182016 4834 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ceph\") on node \"crc\" DevicePath \"\""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.182028 4834 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.182036 4834 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/aa34a899-92ec-4e6e-98e2-090a64b4a8c9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.549023 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg"
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.548992 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-fxvzg" event={"ID":"aa34a899-92ec-4e6e-98e2-090a64b4a8c9","Type":"ContainerDied","Data":"da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b"}
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.549099 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9f5990f18567b72b72b314e5afad114e0f926906af9d0f616c0a7d48a3019b"
Jan 21 16:40:59 crc kubenswrapper[4834]: I0121 16:40:59.549261 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmlpg" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="registry-server" containerID="cri-o://2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740" gracePeriod=2
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.018379 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.109431 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities\") pod \"7cd507f4-57da-4f84-8f9c-a64f30468ace\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") "
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.110033 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content\") pod \"7cd507f4-57da-4f84-8f9c-a64f30468ace\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") "
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.110059 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw9dp\" (UniqueName: \"kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp\") pod \"7cd507f4-57da-4f84-8f9c-a64f30468ace\" (UID: \"7cd507f4-57da-4f84-8f9c-a64f30468ace\") "
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.114141 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities" (OuterVolumeSpecName: "utilities") pod "7cd507f4-57da-4f84-8f9c-a64f30468ace" (UID: "7cd507f4-57da-4f84-8f9c-a64f30468ace"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.116456 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp" (OuterVolumeSpecName: "kube-api-access-nw9dp") pod "7cd507f4-57da-4f84-8f9c-a64f30468ace" (UID: "7cd507f4-57da-4f84-8f9c-a64f30468ace"). InnerVolumeSpecName "kube-api-access-nw9dp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.186707 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cd507f4-57da-4f84-8f9c-a64f30468ace" (UID: "7cd507f4-57da-4f84-8f9c-a64f30468ace"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.212465 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.212506 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cd507f4-57da-4f84-8f9c-a64f30468ace-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.212522 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw9dp\" (UniqueName: \"kubernetes.io/projected/7cd507f4-57da-4f84-8f9c-a64f30468ace-kube-api-access-nw9dp\") on node \"crc\" DevicePath \"\""
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.560883 4834 generic.go:334] "Generic (PLEG): container finished" podID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerID="2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740" exitCode=0
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.560955 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerDied","Data":"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"}
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.560984 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmlpg" event={"ID":"7cd507f4-57da-4f84-8f9c-a64f30468ace","Type":"ContainerDied","Data":"f1be702890df1248c24cd2c33d97c33d23ccd4b7478b4ebc6f7e5ebfd772b6a2"}
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.561002 4834 scope.go:117] "RemoveContainer" containerID="2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.561119 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmlpg"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.596137 4834 scope.go:117] "RemoveContainer" containerID="0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.603353 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.616677 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmlpg"]
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.629223 4834 scope.go:117] "RemoveContainer" containerID="56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.678373 4834 scope.go:117] "RemoveContainer" containerID="2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"
Jan 21 16:41:00 crc kubenswrapper[4834]: E0121 16:41:00.679267 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740\": container with ID starting with 2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740 not found: ID does not exist" containerID="2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.679311 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740"} err="failed to get container status \"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740\": rpc error: code = NotFound desc = could not find container \"2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740\": container with ID starting with 2681d14f6b37fc83c871a12b390037d893fc4935856b3319a126ff160a244740 not found: ID does not exist"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.679345 4834 scope.go:117] "RemoveContainer" containerID="0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"
Jan 21 16:41:00 crc kubenswrapper[4834]: E0121 16:41:00.679795 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104\": container with ID starting with 0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104 not found: ID does not exist" containerID="0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.679839 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104"} err="failed to get container status \"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104\": rpc error: code = NotFound desc = could not find container \"0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104\": container with ID starting with 0da2c052818761ef93190e0224dafbdbf6aff3fcc12ec0491246da900fb45104 not found: ID does not exist"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.679870 4834 scope.go:117] "RemoveContainer" containerID="56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2"
Jan 21 16:41:00 crc kubenswrapper[4834]: E0121 16:41:00.680296 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2\": container with ID starting with 56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2 not found: ID does not exist" containerID="56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2"
Jan 21 16:41:00 crc kubenswrapper[4834]: I0121 16:41:00.680339 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2"} err="failed to get container status \"56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2\": rpc error: code = NotFound desc = could not find container \"56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2\": container with ID starting with 56030776e0225fc7fff57af558c14fec38c041b083633cff6527d7fc84c430c2 not found: ID does not exist"
Jan 21 16:41:02 crc kubenswrapper[4834]: I0121 16:41:02.346217 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" path="/var/lib/kubelet/pods/7cd507f4-57da-4f84-8f9c-a64f30468ace/volumes"
Jan 21 16:41:05 crc kubenswrapper[4834]: I0121 16:41:05.324973 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:41:05 crc kubenswrapper[4834]: E0121 16:41:05.325861 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:41:16 crc kubenswrapper[4834]: I0121 16:41:16.324991 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:41:16 crc kubenswrapper[4834]: E0121 16:41:16.325844 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:41:29 crc kubenswrapper[4834]: I0121 16:41:29.324399 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd"
Jan 21 16:41:29 crc kubenswrapper[4834]: E0121 16:41:29.326958 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.499011 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"]
Jan 21 16:41:33 crc kubenswrapper[4834]: E0121 16:41:33.500122 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="extract-content"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500136 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="extract-content"
Jan 21 16:41:33 crc kubenswrapper[4834]: E0121 16:41:33.500170 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa34a899-92ec-4e6e-98e2-090a64b4a8c9" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500176 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa34a899-92ec-4e6e-98e2-090a64b4a8c9" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:33 crc kubenswrapper[4834]: E0121 16:41:33.500195 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="extract-utilities"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500202 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="extract-utilities"
Jan 21 16:41:33 crc kubenswrapper[4834]: E0121 16:41:33.500223 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="registry-server"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500229 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="registry-server"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500421 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd507f4-57da-4f84-8f9c-a64f30468ace" containerName="registry-server"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.500455 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa34a899-92ec-4e6e-98e2-090a64b4a8c9" containerName="configure-os-openstack-openstack-cell1"
Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.502096 4834 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.511085 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"] Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.573013 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dql79\" (UniqueName: \"kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.573166 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.573233 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.676689 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.677529 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.678029 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dql79\" (UniqueName: \"kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.677837 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.677399 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.699952 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dql79\" (UniqueName: \"kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79\") pod \"redhat-operators-zr98j\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:33 crc kubenswrapper[4834]: I0121 16:41:33.823519 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:34 crc kubenswrapper[4834]: I0121 16:41:34.419300 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"] Jan 21 16:41:34 crc kubenswrapper[4834]: I0121 16:41:34.904571 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerID="4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f" exitCode=0 Jan 21 16:41:34 crc kubenswrapper[4834]: I0121 16:41:34.904636 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerDied","Data":"4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f"} Jan 21 16:41:34 crc kubenswrapper[4834]: I0121 16:41:34.904941 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerStarted","Data":"e916a3aca2d8aab40a5f563cdc72ae8088fdb25b00378af0ff2f53cdd8996681"} Jan 21 16:41:36 crc kubenswrapper[4834]: I0121 16:41:36.924649 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerStarted","Data":"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec"} Jan 21 16:41:39 crc kubenswrapper[4834]: I0121 16:41:39.955367 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerID="e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec" exitCode=0 Jan 21 16:41:39 crc kubenswrapper[4834]: I0121 16:41:39.955577 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerDied","Data":"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec"} Jan 21 16:41:41 crc kubenswrapper[4834]: I0121 16:41:41.979407 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerStarted","Data":"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7"} Jan 21 16:41:42 crc kubenswrapper[4834]: I0121 16:41:42.005284 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zr98j" podStartSLOduration=2.824998432 podStartE2EDuration="9.005263388s" podCreationTimestamp="2026-01-21 16:41:33 +0000 UTC" firstStartedPulling="2026-01-21 16:41:34.907604718 +0000 UTC m=+7840.881953763" lastFinishedPulling="2026-01-21 16:41:41.087869684 +0000 UTC m=+7847.062218719" observedRunningTime="2026-01-21 16:41:42.00278425 +0000 UTC m=+7847.977133315" watchObservedRunningTime="2026-01-21 16:41:42.005263388 +0000 UTC m=+7847.979612433" Jan 21 16:41:43 crc kubenswrapper[4834]: I0121 16:41:43.325159 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 
16:41:43 crc kubenswrapper[4834]: E0121 16:41:43.325454 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:41:43 crc kubenswrapper[4834]: I0121 16:41:43.824040 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:43 crc kubenswrapper[4834]: I0121 16:41:43.824231 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:44 crc kubenswrapper[4834]: I0121 16:41:44.874397 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zr98j" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="registry-server" probeResult="failure" output=< Jan 21 16:41:44 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 16:41:44 crc kubenswrapper[4834]: > Jan 21 16:41:53 crc kubenswrapper[4834]: I0121 16:41:53.889505 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:53 crc kubenswrapper[4834]: I0121 16:41:53.939738 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:54 crc kubenswrapper[4834]: I0121 16:41:54.126874 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"] Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.108247 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zr98j" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="registry-server" containerID="cri-o://f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7" gracePeriod=2 Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.643338 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.703123 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dql79\" (UniqueName: \"kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79\") pod \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.703438 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content\") pod \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.703683 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities\") pod \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\" (UID: \"5b16d0ac-2657-4331-81a5-ed311c6db6a8\") " Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.704611 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities" (OuterVolumeSpecName: "utilities") pod "5b16d0ac-2657-4331-81a5-ed311c6db6a8" (UID: "5b16d0ac-2657-4331-81a5-ed311c6db6a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.710512 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79" (OuterVolumeSpecName: "kube-api-access-dql79") pod "5b16d0ac-2657-4331-81a5-ed311c6db6a8" (UID: "5b16d0ac-2657-4331-81a5-ed311c6db6a8"). InnerVolumeSpecName "kube-api-access-dql79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.807981 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.808060 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dql79\" (UniqueName: \"kubernetes.io/projected/5b16d0ac-2657-4331-81a5-ed311c6db6a8-kube-api-access-dql79\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.831129 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b16d0ac-2657-4331-81a5-ed311c6db6a8" (UID: "5b16d0ac-2657-4331-81a5-ed311c6db6a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:41:55 crc kubenswrapper[4834]: I0121 16:41:55.910393 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b16d0ac-2657-4331-81a5-ed311c6db6a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.120822 4834 generic.go:334] "Generic (PLEG): container finished" podID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerID="f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7" exitCode=0 Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.120862 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerDied","Data":"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7"} Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.120898 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr98j" event={"ID":"5b16d0ac-2657-4331-81a5-ed311c6db6a8","Type":"ContainerDied","Data":"e916a3aca2d8aab40a5f563cdc72ae8088fdb25b00378af0ff2f53cdd8996681"} Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.120917 4834 scope.go:117] "RemoveContainer" containerID="f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.120915 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zr98j" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.142522 4834 scope.go:117] "RemoveContainer" containerID="e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.165078 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"] Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.174790 4834 scope.go:117] "RemoveContainer" containerID="4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.178125 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zr98j"] Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.217640 4834 scope.go:117] "RemoveContainer" containerID="f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7" Jan 21 16:41:56 crc kubenswrapper[4834]: E0121 16:41:56.218094 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7\": container with ID starting with f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7 not found: ID does not exist" containerID="f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.218125 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7"} err="failed to get container status \"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7\": rpc error: code = NotFound desc = could not find container \"f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7\": container with ID starting with f37cc00b650ef651f8c640d372a20f72478067bba584bcbde2a15074e363d3b7 not found: ID does not exist" Jan 21 16:41:56 crc 
kubenswrapper[4834]: I0121 16:41:56.218150 4834 scope.go:117] "RemoveContainer" containerID="e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec" Jan 21 16:41:56 crc kubenswrapper[4834]: E0121 16:41:56.218399 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec\": container with ID starting with e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec not found: ID does not exist" containerID="e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.218423 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec"} err="failed to get container status \"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec\": rpc error: code = NotFound desc = could not find container \"e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec\": container with ID starting with e7ef91ccdea4ca98ae640adf08b2990833b95ba109c02026c66be9bdf5edceec not found: ID does not exist" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.218435 4834 scope.go:117] "RemoveContainer" containerID="4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f" Jan 21 16:41:56 crc kubenswrapper[4834]: E0121 16:41:56.218659 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f\": container with ID starting with 4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f not found: ID does not exist" containerID="4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.218685 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f"} err="failed to get container status \"4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f\": rpc error: code = NotFound desc = could not find container \"4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f\": container with ID starting with 4183d25ebd5ddfdaf041fed97cc6446fa89eb8e9d41a587a81321b37069b105f not found: ID does not exist" Jan 21 16:41:56 crc kubenswrapper[4834]: I0121 16:41:56.343663 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" path="/var/lib/kubelet/pods/5b16d0ac-2657-4331-81a5-ed311c6db6a8/volumes" Jan 21 16:41:57 crc kubenswrapper[4834]: I0121 16:41:57.325568 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:41:57 crc kubenswrapper[4834]: E0121 16:41:57.325978 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:42:12 crc kubenswrapper[4834]: I0121 16:42:12.324988 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" 
Jan 21 16:42:12 crc kubenswrapper[4834]: E0121 16:42:12.325879 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.141608 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q499l/must-gather-vns2d"] Jan 21 16:42:18 crc kubenswrapper[4834]: E0121 16:42:18.142827 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="extract-utilities" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.142847 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="extract-utilities" Jan 21 16:42:18 crc kubenswrapper[4834]: E0121 16:42:18.142891 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="registry-server" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.142899 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="registry-server" Jan 21 16:42:18 crc kubenswrapper[4834]: E0121 16:42:18.142910 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="extract-content" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.142916 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="extract-content" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.143173 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b16d0ac-2657-4331-81a5-ed311c6db6a8" containerName="registry-server" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.144688 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.154908 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q499l/must-gather-vns2d"] Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.157613 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-q499l"/"default-dockercfg-7fsl5" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.157773 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q499l"/"kube-root-ca.crt" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.157869 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-q499l"/"openshift-service-ca.crt" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.214540 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd309b06-ea86-4cb4-a3a2-71abcee62842-must-gather-output\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.214650 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5xs\" (UniqueName: \"kubernetes.io/projected/fd309b06-ea86-4cb4-a3a2-71abcee62842-kube-api-access-8n5xs\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.316949 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n5xs\" (UniqueName: \"kubernetes.io/projected/fd309b06-ea86-4cb4-a3a2-71abcee62842-kube-api-access-8n5xs\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.317206 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd309b06-ea86-4cb4-a3a2-71abcee62842-must-gather-output\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.317722 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd309b06-ea86-4cb4-a3a2-71abcee62842-must-gather-output\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.352180 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n5xs\" (UniqueName: \"kubernetes.io/projected/fd309b06-ea86-4cb4-a3a2-71abcee62842-kube-api-access-8n5xs\") pod \"must-gather-vns2d\" (UID: \"fd309b06-ea86-4cb4-a3a2-71abcee62842\") " pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:18 crc kubenswrapper[4834]: I0121 16:42:18.479719 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/must-gather-vns2d" Jan 21 16:42:19 crc kubenswrapper[4834]: I0121 16:42:19.053040 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-q499l/must-gather-vns2d"] Jan 21 16:42:19 crc kubenswrapper[4834]: I0121 16:42:19.338235 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/must-gather-vns2d" event={"ID":"fd309b06-ea86-4cb4-a3a2-71abcee62842","Type":"ContainerStarted","Data":"5928197a614953f53f618a53c9de48a63387075120451055bd3101d4a84a948f"} Jan 21 16:42:23 crc kubenswrapper[4834]: I0121 16:42:23.326169 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:42:23 crc kubenswrapper[4834]: E0121 16:42:23.327008 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:42:28 crc kubenswrapper[4834]: I0121 16:42:28.436101 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/must-gather-vns2d" event={"ID":"fd309b06-ea86-4cb4-a3a2-71abcee62842","Type":"ContainerStarted","Data":"c012544b576b6311a33c9d42972d2d64aba32fff5f53fb25e86e28cd00ca84a1"} Jan 21 16:42:28 crc kubenswrapper[4834]: I0121 16:42:28.436713 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/must-gather-vns2d" event={"ID":"fd309b06-ea86-4cb4-a3a2-71abcee62842","Type":"ContainerStarted","Data":"abb33466abfc7090cff749e487454f6602f23e04addffaa834762a96fdaba4c0"} Jan 21 16:42:28 crc kubenswrapper[4834]: I0121 16:42:28.453519 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-q499l/must-gather-vns2d" podStartSLOduration=2.334694192 podStartE2EDuration="10.453496795s" podCreationTimestamp="2026-01-21 16:42:18 +0000 UTC" firstStartedPulling="2026-01-21 16:42:19.058963413 +0000 UTC m=+7885.033312458" lastFinishedPulling="2026-01-21 16:42:27.177766016 +0000 UTC m=+7893.152115061" observedRunningTime="2026-01-21 16:42:28.449092926 +0000 UTC m=+7894.423441981" watchObservedRunningTime="2026-01-21 16:42:28.453496795 +0000 UTC m=+7894.427845840" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.053220 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.056573 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.083905 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.211648 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cclqg\" (UniqueName: \"kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.211715 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.211903 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.317346 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.317498 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cclqg\" (UniqueName: \"kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.317521 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.318095 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.318448 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.350612 4834 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cclqg\" (UniqueName: \"kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg\") pod \"redhat-marketplace-vzkgl\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:31 crc kubenswrapper[4834]: I0121 16:42:31.389717 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:32 crc kubenswrapper[4834]: I0121 16:42:32.041532 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:32 crc kubenswrapper[4834]: W0121 16:42:32.047824 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9575e90f_c113_4ab4_94c6_59136accb180.slice/crio-5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283 WatchSource:0}: Error finding container 5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283: Status 404 returned error can't find the container with id 5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283 Jan 21 16:42:32 crc kubenswrapper[4834]: I0121 16:42:32.528817 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerStarted","Data":"5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283"} Jan 21 16:42:33 crc kubenswrapper[4834]: I0121 16:42:33.539494 4834 generic.go:334] "Generic (PLEG): container finished" podID="9575e90f-c113-4ab4-94c6-59136accb180" containerID="bbfc43ee85b976aede5aba63660d76fc7c904a1343294e9febce57a3303a7e94" exitCode=0 Jan 21 16:42:33 crc kubenswrapper[4834]: I0121 16:42:33.540171 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerDied","Data":"bbfc43ee85b976aede5aba63660d76fc7c904a1343294e9febce57a3303a7e94"} Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.396221 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q499l/crc-debug-8j6c4"] Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.398839 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.426369 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.426583 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgspz\" (UniqueName: \"kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.528773 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.528910 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.528997 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgspz\" (UniqueName: \"kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.551628 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgspz\" (UniqueName: \"kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz\") pod \"crc-debug-8j6c4\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.553121 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerStarted","Data":"53f8c73a37675553f4c49c3d435d4242e90126fe47cb116d2cf832859703a26c"} Jan 21 16:42:34 crc kubenswrapper[4834]: I0121 16:42:34.726507 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:42:34 crc kubenswrapper[4834]: W0121 16:42:34.757006 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5fb0117_0e37_42d3_8261_6a1d68ed4145.slice/crio-620d04bfdfe4e2273d660c2da09ba29fe4681ca6a6e397a01a80c52e14e62af7 WatchSource:0}: Error finding container 620d04bfdfe4e2273d660c2da09ba29fe4681ca6a6e397a01a80c52e14e62af7: Status 404 returned error can't find the container with id 620d04bfdfe4e2273d660c2da09ba29fe4681ca6a6e397a01a80c52e14e62af7 Jan 21 16:42:35 crc kubenswrapper[4834]: I0121 16:42:35.581205 4834 generic.go:334] "Generic (PLEG): container finished" podID="9575e90f-c113-4ab4-94c6-59136accb180" containerID="53f8c73a37675553f4c49c3d435d4242e90126fe47cb116d2cf832859703a26c" exitCode=0 Jan 21 16:42:35 crc kubenswrapper[4834]: I0121 16:42:35.581540 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerDied","Data":"53f8c73a37675553f4c49c3d435d4242e90126fe47cb116d2cf832859703a26c"} Jan 21 16:42:35 crc kubenswrapper[4834]: I0121 16:42:35.584523 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/crc-debug-8j6c4" event={"ID":"c5fb0117-0e37-42d3-8261-6a1d68ed4145","Type":"ContainerStarted","Data":"620d04bfdfe4e2273d660c2da09ba29fe4681ca6a6e397a01a80c52e14e62af7"} Jan 21 16:42:36 crc kubenswrapper[4834]: I0121 16:42:36.325018 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:42:36 crc kubenswrapper[4834]: E0121 16:42:36.325803 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:42:36 crc kubenswrapper[4834]: I0121 16:42:36.600647 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerStarted","Data":"62427e723d6a8df2823dec4036e0b86905e9a9fd77dd54c11e89a27c296d541b"} Jan 21 16:42:36 crc kubenswrapper[4834]: I0121 16:42:36.633363 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vzkgl" podStartSLOduration=3.172986468 podStartE2EDuration="5.633333314s" podCreationTimestamp="2026-01-21 16:42:31 +0000 UTC" firstStartedPulling="2026-01-21 16:42:33.541944307 +0000 UTC m=+7899.516293352" lastFinishedPulling="2026-01-21 16:42:36.002291163 +0000 UTC m=+7901.976640198" observedRunningTime="2026-01-21 16:42:36.620087778 +0000 UTC m=+7902.594436843" watchObservedRunningTime="2026-01-21 16:42:36.633333314 +0000 UTC m=+7902.607682359" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.277561 4834 ???:1] "http: TLS handshake error from 192.168.126.11:44030: EOF" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.283208 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d/alertmanager/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 
16:42:37.305992 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d/config-reloader/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.313311 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0d4d3dce-8ceb-4c4d-883b-bf91e68f5c7d/init-config-reloader/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.355291 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ba644516-d083-404a-b5fc-6b5589098b4a/aodh-api/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.408994 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ba644516-d083-404a-b5fc-6b5589098b4a/aodh-evaluator/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.416059 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ba644516-d083-404a-b5fc-6b5589098b4a/aodh-notifier/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.553985 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ba644516-d083-404a-b5fc-6b5589098b4a/aodh-listener/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.578652 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbcdf9bfb-bhr5k_a4bbd0de-67a6-4383-b3f8-df8eebba1442/barbican-api-log/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.585723 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fbcdf9bfb-bhr5k_a4bbd0de-67a6-4383-b3f8-df8eebba1442/barbican-api/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.617612 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dd49b774-nw7wl_b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7/barbican-keystone-listener-log/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.626030 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dd49b774-nw7wl_b7f83cf9-6b27-41e3-bad4-b5fe6e3202e7/barbican-keystone-listener/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.646004 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f7f7bcbb9-k58jh_80665cd8-04f6-4768-ba52-98354aad364d/barbican-worker-log/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.653186 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f7f7bcbb9-k58jh_80665cd8-04f6-4768-ba52-98354aad364d/barbican-worker/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.701457 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-sgmzf_3fc69335-1de8-4e41-a128-cc4f162719f1/bootstrap-openstack-openstack-cell1/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.735861 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee/ceilometer-central-agent/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.948500 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee/ceilometer-notification-agent/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.955968 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee/sg-core/0.log" Jan 21 
16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.973176 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9b79d62d-b7f4-4789-b9a0-53fc8a42b3ee/proxy-httpd/0.log" Jan 21 16:42:37 crc kubenswrapper[4834]: I0121 16:42:37.994634 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_09e4f954-89a6-4faf-9021-0e848b28c7b4/cinder-api-log/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.034210 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_09e4f954-89a6-4faf-9021-0e848b28c7b4/cinder-api/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.276892 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bba744ba-b8e9-46e1-a4b9-95e30841864d/cinder-backup/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.300622 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bba744ba-b8e9-46e1-a4b9-95e30841864d/probe/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.376628 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2393786a-fa47-4d59-94a0-ec0e73f54392/cinder-scheduler/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.405946 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2393786a-fa47-4d59-94a0-ec0e73f54392/probe/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.509211 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b18a40a5-be9d-43a9-a420-52b71cf421b9/cinder-volume/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.541386 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b18a40a5-be9d-43a9-a420-52b71cf421b9/probe/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.613017 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-jvtmv_ff895217-df84-4fe6-a466-7f38685ed926/configure-network-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.809848 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-fntcc_c3577c85-fa53-4981-a641-9ca95c0fa877/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.852336 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-fxvzg_aa34a899-92ec-4e6e-98e2-090a64b4a8c9/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.876665 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-mcc9m_ab3376df-6bc1-4e54-ac05-28d2db9d2486/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.905994 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-s6bqt_83e06af3-bd7e-412e-b471-e60df89514df/configure-os-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.926018 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-68679f7d8c-khwr5_5f1d6102-14fe-4d09-ad69-41f0f3405fdc/dnsmasq-dns/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.939459 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-68679f7d8c-khwr5_5f1d6102-14fe-4d09-ad69-41f0f3405fdc/init/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.968296 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-xgmnn_2dfefb4c-526c-45fc-a398-41a79b28ac0b/download-cache-openstack-openstack-cell1/0.log" Jan 21 16:42:38 crc kubenswrapper[4834]: I0121 16:42:38.988807 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_187f9c99-b482-4a74-9bef-7017e691f1e2/glance-log/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.005311 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_187f9c99-b482-4a74-9bef-7017e691f1e2/glance-httpd/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.016285 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_debc9a9a-02de-46bd-ad18-a8c5527e7bc2/glance-log/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.068478 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_debc9a9a-02de-46bd-ad18-a8c5527e7bc2/glance-httpd/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.140285 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7d548bb56b-54t7m_b6953411-1bd9-482c-a7c7-37885a856ec3/heat-api/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.221593 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6b7dc4444b-95qgh_a5917c3a-9e06-404b-8efc-a52457ca4625/heat-cfnapi/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.248419 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-8576c9d7cb-7mkvb_2e806442-8e14-4797-8b36-1dc85c99ace6/heat-engine/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.331465 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6786d6bdf9-hpfpt_7432cb83-a1c7-4a08-ad0a-f6690e07aee2/horizon-log/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.422986 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6786d6bdf9-hpfpt_7432cb83-a1c7-4a08-ad0a-f6690e07aee2/horizon/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.461475 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-k655t_0cef0c11-c99e-4c8a-adc8-b9ee149a0660/install-os-openstack-openstack-cell1/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.618394 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-75d69fc98c-bjw7h_e6f9b2f5-81d2-4c8a-b4c3-c0c31c75678b/keystone-api/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.630003 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483521-9g5jz_af6b62a3-7329-456a-8ae7-bbd111be156c/keystone-cron/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.640490 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3804ae5f-4153-4f31-b533-79d534c7e9a3/kube-state-metrics/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.656582 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_44fe3c57-e369-4ae0-ab19-2dcbb1179714/manila-api-log/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 
16:42:39.773398 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_44fe3c57-e369-4ae0-ab19-2dcbb1179714/manila-api/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.844369 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_629902f5-4954-484d-ba2d-a1c356bd7c68/manila-scheduler/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.853877 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_629902f5-4954-484d-ba2d-a1c356bd7c68/probe/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.906374 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f0747c6b-3d55-4fd7-afa8-2bdac4a772c4/manila-share/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.920217 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f0747c6b-3d55-4fd7-afa8-2bdac4a772c4/probe/0.log" Jan 21 16:42:39 crc kubenswrapper[4834]: I0121 16:42:39.929045 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_f0dd0aa2-a14e-499d-8a95-4372fcc8fcbf/adoption/0.log" Jan 21 16:42:41 crc kubenswrapper[4834]: I0121 16:42:41.390036 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:41 crc kubenswrapper[4834]: I0121 16:42:41.391876 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:41 crc kubenswrapper[4834]: I0121 16:42:41.446363 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:41 crc kubenswrapper[4834]: I0121 16:42:41.769881 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:41 crc kubenswrapper[4834]: I0121 16:42:41.839213 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:43 crc kubenswrapper[4834]: I0121 16:42:43.691799 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vzkgl" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="registry-server" containerID="cri-o://62427e723d6a8df2823dec4036e0b86905e9a9fd77dd54c11e89a27c296d541b" gracePeriod=2 Jan 21 16:42:44 crc kubenswrapper[4834]: I0121 16:42:44.714645 4834 generic.go:334] "Generic (PLEG): container finished" podID="9575e90f-c113-4ab4-94c6-59136accb180" containerID="62427e723d6a8df2823dec4036e0b86905e9a9fd77dd54c11e89a27c296d541b" exitCode=0 Jan 21 16:42:44 crc kubenswrapper[4834]: I0121 16:42:44.714869 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerDied","Data":"62427e723d6a8df2823dec4036e0b86905e9a9fd77dd54c11e89a27c296d541b"} Jan 21 16:42:48 crc kubenswrapper[4834]: I0121 16:42:48.584300 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ed90e2cf-be92-46d4-b3f0-ef730606de1c/memcached/0.log" Jan 21 16:42:48 crc kubenswrapper[4834]: I0121 16:42:48.722898 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dbb78977-9b4b4_cfaf782e-b5bf-4675-ba2d-9d0777b68260/neutron-api/0.log" Jan 21 16:42:48 crc kubenswrapper[4834]: 
I0121 16:42:48.759338 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6dbb78977-9b4b4_cfaf782e-b5bf-4675-ba2d-9d0777b68260/neutron-httpd/0.log" Jan 21 16:42:48 crc kubenswrapper[4834]: I0121 16:42:48.872797 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4317aa50-b40b-4725-ac51-62d674c1a05c/nova-api-log/0.log" Jan 21 16:42:48 crc kubenswrapper[4834]: I0121 16:42:48.959986 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4317aa50-b40b-4725-ac51-62d674c1a05c/nova-api-api/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.050295 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_62b033f2-d854-4c8b-870b-bb9ad473ec3b/nova-cell0-conductor-conductor/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.125502 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f99d8448-b419-4217-9191-492bb9d4bd74/nova-cell1-conductor-conductor/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.190427 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_84894834-5968-45e4-af2d-e13a2539d13e/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.263197 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e614d11c-ce9d-42e2-8805-d3a0da859e7f/nova-metadata-log/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.368401 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e614d11c-ce9d-42e2-8805-d3a0da859e7f/nova-metadata-metadata/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.474341 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_77e8adf4-e276-4c90-b0c6-59f8806a0fc9/nova-scheduler-scheduler/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.640153 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d7f5574f4-nnc6m_cf162c87-ffd6-4a17-8ddf-d16cd28bfaca/octavia-api/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.664477 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d7f5574f4-nnc6m_cf162c87-ffd6-4a17-8ddf-d16cd28bfaca/octavia-api-provider-agent/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.680033 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-6d7f5574f4-nnc6m_cf162c87-ffd6-4a17-8ddf-d16cd28bfaca/init/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.751296 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-bf9c8_e34a7056-efc5-465b-b5d2-80193f49b73f/octavia-healthmanager/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.875801 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-bf9c8_e34a7056-efc5-465b-b5d2-80193f49b73f/init/0.log" Jan 21 16:42:49 crc kubenswrapper[4834]: I0121 16:42:49.909127 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9gpf9_f26b39f3-1325-485e-a56a-59cd626ceb94/octavia-housekeeping/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.058460 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9gpf9_f26b39f3-1325-485e-a56a-59cd626ceb94/init/0.log" Jan 21 16:42:50 crc 
kubenswrapper[4834]: I0121 16:42:50.070651 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-9gdbz_c78f244f-a953-49ad-b632-96f0ec0f75ee/octavia-rsyslog/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.107131 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-9gdbz_c78f244f-a953-49ad-b632-96f0ec0f75ee/init/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.250597 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-k8p58_660b1ee6-aacb-4c91-a79d-f74b4ebfb640/octavia-worker/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.266756 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-k8p58_660b1ee6-aacb-4c91-a79d-f74b4ebfb640/init/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.295210 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4fcc39f6-a0b5-404d-a8c6-329408e95823/galera/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.306591 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4fcc39f6-a0b5-404d-a8c6-329408e95823/mysql-bootstrap/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.367141 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb09e7ff-f752-4f08-adfb-8bdee7a815fd/galera/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.389804 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eb09e7ff-f752-4f08-adfb-8bdee7a815fd/mysql-bootstrap/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.403268 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f7e09cad-3c34-40a0-86d1-18dda726ffd1/openstackclient/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.419638 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fgmph_427e0c14-b79d-43fc-b5b5-ad41b83d9988/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.436112 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cg8v_fa38c942-b967-4029-8c2f-e7c54ab9cedb/ovsdb-server/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.456031 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cg8v_fa38c942-b967-4029-8c2f-e7c54ab9cedb/ovs-vswitchd/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.466140 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cg8v_fa38c942-b967-4029-8c2f-e7c54ab9cedb/ovsdb-server-init/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.480964 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ps9ls_fa07ecbb-45e4-4dd9-b3c8-91131d7fcfd0/ovn-controller/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.490380 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_da3ead55-f584-4ef3-aa5f-799e103b68db/adoption/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.509820 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5775a7ad-6dc5-4cfb-8f00-302c15dedfac/ovn-northd/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.527785 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_5775a7ad-6dc5-4cfb-8f00-302c15dedfac/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.542908 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_70bb6d32-c183-4cad-9cfa-6b40f80551d8/ovsdbserver-nb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.554341 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_70bb6d32-c183-4cad-9cfa-6b40f80551d8/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.584080 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3/ovsdbserver-nb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.592455 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3d5f827c-3861-4d8d-a72b-7a9f0fe75ea3/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.633715 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_e3cf6365-be8d-41d7-a56e-03259a20a210/ovsdbserver-nb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.646815 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_e3cf6365-be8d-41d7-a56e-03259a20a210/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.686823 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2a0c44c8-44bf-4166-a864-c2b993c2e042/ovsdbserver-sb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.702068 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2a0c44c8-44bf-4166-a864-c2b993c2e042/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.720030 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_e7dd5896-6634-4d8d-b95b-88138f750ca6/ovsdbserver-sb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.731595 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_e7dd5896-6634-4d8d-b95b-88138f750ca6/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.751275 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_30586894-d62b-41e6-b0aa-32a52c74c2d4/ovsdbserver-sb/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.764447 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_30586894-d62b-41e6-b0aa-32a52c74c2d4/openstack-network-exporter/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.811730 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vzkgl" event={"ID":"9575e90f-c113-4ab4-94c6-59136accb180","Type":"ContainerDied","Data":"5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283"} Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.811769 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b010d979724f99f0a46909a0ab95af083dc3bbc5f2b30518658cbf3375b0283" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.819507 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846f6d55db-7dwsr_ca65b952-f3c1-4b89-b0ba-63085e701161/placement-log/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 
16:42:50.850791 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-846f6d55db-7dwsr_ca65b952-f3c1-4b89-b0ba-63085e701161/placement-api/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.866455 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c9k5x8_5cb33b06-57dd-4368-a7cb-97c2b214f2f2/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.881910 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f4d8624c-870c-4ec9-bbbe-2cb20ed149df/prometheus/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.891602 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.893700 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f4d8624c-870c-4ec9-bbbe-2cb20ed149df/config-reloader/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.906645 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f4d8624c-870c-4ec9-bbbe-2cb20ed149df/thanos-sidecar/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.920008 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f4d8624c-870c-4ec9-bbbe-2cb20ed149df/init-config-reloader/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.938174 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b925ea00-8dd3-4ced-add2-41483f9b4d63/rabbitmq/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.951028 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b925ea00-8dd3-4ced-add2-41483f9b4d63/setup-container/0.log" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.959715 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content\") pod \"9575e90f-c113-4ab4-94c6-59136accb180\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.959831 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cclqg\" (UniqueName: \"kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg\") pod \"9575e90f-c113-4ab4-94c6-59136accb180\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.960054 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities\") pod \"9575e90f-c113-4ab4-94c6-59136accb180\" (UID: \"9575e90f-c113-4ab4-94c6-59136accb180\") " Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.961737 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities" (OuterVolumeSpecName: "utilities") pod "9575e90f-c113-4ab4-94c6-59136accb180" (UID: "9575e90f-c113-4ab4-94c6-59136accb180"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.969686 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg" (OuterVolumeSpecName: "kube-api-access-cclqg") pod "9575e90f-c113-4ab4-94c6-59136accb180" (UID: "9575e90f-c113-4ab4-94c6-59136accb180"). InnerVolumeSpecName "kube-api-access-cclqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:42:50 crc kubenswrapper[4834]: I0121 16:42:50.997944 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9575e90f-c113-4ab4-94c6-59136accb180" (UID: "9575e90f-c113-4ab4-94c6-59136accb180"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.043417 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5b94bf96-a478-4e6d-adce-ef88ef069f9a/rabbitmq/0.log" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.049979 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_5b94bf96-a478-4e6d-adce-ef88ef069f9a/setup-container/0.log" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.061625 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.061651 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cclqg\" (UniqueName: \"kubernetes.io/projected/9575e90f-c113-4ab4-94c6-59136accb180-kube-api-access-cclqg\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.061660 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9575e90f-c113-4ab4-94c6-59136accb180-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.096496 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-xql62_9d206bcf-791e-4e8e-bddf-faf2365abf8c/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.107946 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-xn7mt_0f96a41b-ab9c-43e0-be2e-e54502919aeb/validate-network-openstack-openstack-cell1/0.log" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.324535 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:42:51 crc kubenswrapper[4834]: E0121 16:42:51.325209 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.846678 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vzkgl" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.846905 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/crc-debug-8j6c4" event={"ID":"c5fb0117-0e37-42d3-8261-6a1d68ed4145","Type":"ContainerStarted","Data":"f87b61f415c61d2153ffc06db88073fc07529cbaaaad496bc2e3bad619cf714a"} Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.875403 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-q499l/crc-debug-8j6c4" podStartSLOduration=2.152444501 podStartE2EDuration="17.87537645s" podCreationTimestamp="2026-01-21 16:42:34 +0000 UTC" firstStartedPulling="2026-01-21 16:42:34.761597385 +0000 UTC m=+7900.735946430" lastFinishedPulling="2026-01-21 16:42:50.484529324 +0000 UTC m=+7916.458878379" observedRunningTime="2026-01-21 16:42:51.865106127 +0000 UTC m=+7917.839455172" watchObservedRunningTime="2026-01-21 16:42:51.87537645 +0000 UTC m=+7917.849725495" Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.905453 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:51 crc kubenswrapper[4834]: I0121 16:42:51.920034 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vzkgl"] Jan 21 16:42:52 crc kubenswrapper[4834]: I0121 16:42:52.345353 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9575e90f-c113-4ab4-94c6-59136accb180" path="/var/lib/kubelet/pods/9575e90f-c113-4ab4-94c6-59136accb180/volumes" Jan 21 16:43:02 crc kubenswrapper[4834]: I0121 16:43:02.522854 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/controller/0.log" Jan 21 16:43:02 crc kubenswrapper[4834]: I0121 16:43:02.529649 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/kube-rbac-proxy/0.log" Jan 21 16:43:02 crc kubenswrapper[4834]: I0121 16:43:02.552728 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/controller/0.log" Jan 21 16:43:03 crc kubenswrapper[4834]: I0121 16:43:03.324741 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:43:03 crc kubenswrapper[4834]: E0121 16:43:03.325467 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.585239 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.596642 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/reloader/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.602862 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr-metrics/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.611189 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.619760 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy-frr/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.641894 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-frr-files/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.651526 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-reloader/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.658405 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-metrics/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.675496 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8g2fp_2dc8a2ee-3729-4765-86aa-4f9b89a00c79/frr-k8s-webhook-server/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.705813 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9f5f6b6d-8k84x_2e030c1c-2b95-4ea1-a9be-91a707b92e15/manager/0.log" Jan 21 16:43:06 crc kubenswrapper[4834]: I0121 16:43:06.715825 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-794b596549-wm7g7_09ca80fd-b29f-4eca-8ad2-28f29cb91e78/webhook-server/0.log" Jan 21 16:43:07 crc kubenswrapper[4834]: I0121 16:43:07.509515 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/speaker/0.log" Jan 21 16:43:07 crc kubenswrapper[4834]: I0121 16:43:07.516615 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/kube-rbac-proxy/0.log" Jan 21 16:43:08 crc kubenswrapper[4834]: I0121 16:43:08.027311 4834 generic.go:334] "Generic (PLEG): container finished" podID="c5fb0117-0e37-42d3-8261-6a1d68ed4145" containerID="f87b61f415c61d2153ffc06db88073fc07529cbaaaad496bc2e3bad619cf714a" exitCode=0 Jan 21 16:43:08 crc kubenswrapper[4834]: I0121 16:43:08.027356 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/crc-debug-8j6c4" event={"ID":"c5fb0117-0e37-42d3-8261-6a1d68ed4145","Type":"ContainerDied","Data":"f87b61f415c61d2153ffc06db88073fc07529cbaaaad496bc2e3bad619cf714a"} Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.167715 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.213252 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q499l/crc-debug-8j6c4"] Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.223893 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q499l/crc-debug-8j6c4"] Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.326038 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host\") pod \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.326121 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgspz\" (UniqueName: \"kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz\") pod \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\" (UID: \"c5fb0117-0e37-42d3-8261-6a1d68ed4145\") " Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.326159 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host" (OuterVolumeSpecName: "host") pod "c5fb0117-0e37-42d3-8261-6a1d68ed4145" (UID: "c5fb0117-0e37-42d3-8261-6a1d68ed4145"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.327561 4834 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5fb0117-0e37-42d3-8261-6a1d68ed4145-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.342314 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz" (OuterVolumeSpecName: "kube-api-access-bgspz") pod "c5fb0117-0e37-42d3-8261-6a1d68ed4145" (UID: "c5fb0117-0e37-42d3-8261-6a1d68ed4145"). InnerVolumeSpecName "kube-api-access-bgspz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:43:09 crc kubenswrapper[4834]: I0121 16:43:09.429610 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgspz\" (UniqueName: \"kubernetes.io/projected/c5fb0117-0e37-42d3-8261-6a1d68ed4145-kube-api-access-bgspz\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.050353 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620d04bfdfe4e2273d660c2da09ba29fe4681ca6a6e397a01a80c52e14e62af7" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.050421 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-8j6c4" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.341252 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5fb0117-0e37-42d3-8261-6a1d68ed4145" path="/var/lib/kubelet/pods/c5fb0117-0e37-42d3-8261-6a1d68ed4145/volumes" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.378949 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-q499l/crc-debug-k7f7g"] Jan 21 16:43:10 crc kubenswrapper[4834]: E0121 16:43:10.379470 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="extract-utilities" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379493 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="extract-utilities" Jan 21 16:43:10 crc kubenswrapper[4834]: E0121 16:43:10.379533 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="extract-content" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379552 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="extract-content" Jan 21 16:43:10 crc kubenswrapper[4834]: E0121 16:43:10.379574 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="registry-server" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379582 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="registry-server" Jan 21 16:43:10 crc kubenswrapper[4834]: E0121 16:43:10.379595 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5fb0117-0e37-42d3-8261-6a1d68ed4145" containerName="container-00" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379603 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fb0117-0e37-42d3-8261-6a1d68ed4145" containerName="container-00" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379859 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="9575e90f-c113-4ab4-94c6-59136accb180" containerName="registry-server" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.379885 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5fb0117-0e37-42d3-8261-6a1d68ed4145" containerName="container-00" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.384107 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.552585 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dhk\" (UniqueName: \"kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.552722 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.654655 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9dhk\" (UniqueName: \"kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.654798 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.654984 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.676870 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9dhk\" (UniqueName: \"kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk\") pod \"crc-debug-k7f7g\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: I0121 16:43:10.721649 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:10 crc kubenswrapper[4834]: W0121 16:43:10.778980 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2de786bc_1544_4ca5_9070_1c7aee9ca81b.slice/crio-2cbe2196ff75e00a1247fe5fef83428a50cbae96a7ece1094915f39f4c7f5b3f WatchSource:0}: Error finding container 2cbe2196ff75e00a1247fe5fef83428a50cbae96a7ece1094915f39f4c7f5b3f: Status 404 returned error can't find the container with id 2cbe2196ff75e00a1247fe5fef83428a50cbae96a7ece1094915f39f4c7f5b3f Jan 21 16:43:11 crc kubenswrapper[4834]: I0121 16:43:11.061465 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/crc-debug-k7f7g" event={"ID":"2de786bc-1544-4ca5-9070-1c7aee9ca81b","Type":"ContainerStarted","Data":"2cbe2196ff75e00a1247fe5fef83428a50cbae96a7ece1094915f39f4c7f5b3f"} Jan 21 16:43:12 crc kubenswrapper[4834]: I0121 16:43:12.072181 4834 generic.go:334] "Generic (PLEG): container finished" podID="2de786bc-1544-4ca5-9070-1c7aee9ca81b" containerID="e129a46f5722a8f928bdb0a561f56528a2d9b84f8e1ddbe288cb1c104b4a0844" exitCode=1 Jan 21 16:43:12 crc kubenswrapper[4834]: I0121 16:43:12.072254 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-q499l/crc-debug-k7f7g" event={"ID":"2de786bc-1544-4ca5-9070-1c7aee9ca81b","Type":"ContainerDied","Data":"e129a46f5722a8f928bdb0a561f56528a2d9b84f8e1ddbe288cb1c104b4a0844"} Jan 21 16:43:12 crc kubenswrapper[4834]: I0121 16:43:12.126773 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-q499l/crc-debug-k7f7g"] Jan 21 16:43:12 crc kubenswrapper[4834]: I0121 16:43:12.137480 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-q499l/crc-debug-k7f7g"] Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.200293 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.312634 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host\") pod \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.312767 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host" (OuterVolumeSpecName: "host") pod "2de786bc-1544-4ca5-9070-1c7aee9ca81b" (UID: "2de786bc-1544-4ca5-9070-1c7aee9ca81b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.312822 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9dhk\" (UniqueName: \"kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk\") pod \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\" (UID: \"2de786bc-1544-4ca5-9070-1c7aee9ca81b\") " Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.313508 4834 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2de786bc-1544-4ca5-9070-1c7aee9ca81b-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.327795 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk" (OuterVolumeSpecName: "kube-api-access-j9dhk") pod "2de786bc-1544-4ca5-9070-1c7aee9ca81b" (UID: "2de786bc-1544-4ca5-9070-1c7aee9ca81b"). InnerVolumeSpecName "kube-api-access-j9dhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.416327 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9dhk\" (UniqueName: \"kubernetes.io/projected/2de786bc-1544-4ca5-9070-1c7aee9ca81b-kube-api-access-j9dhk\") on node \"crc\" DevicePath \"\"" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.449519 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/extract/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.461418 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/util/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.471716 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/pull/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.594115 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-v7r86_ac1f15b1-9c34-49e4-957a-74a950b6583f/manager/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.687353 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qdt7x_00cc8ba5-f3f5-42e9-a23a-6c3b1989763b/manager/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.699528 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dfmb5_2bf8d583-f505-436a-a60f-ec418f4d5e94/manager/0.log" Jan 21 16:43:13 crc kubenswrapper[4834]: I0121 16:43:13.950317 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-x56p7_dded3929-2919-4903-8465-da99004a3cd6/manager/0.log" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.005616 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2dj64_ae8ff7f9-4d1b-4562-a307-f9ad95966c48/manager/0.log" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.128843 4834 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-jvfvr_c181377f-ebc2-4ebf-ba81-71d6609d37c4/manager/0.log" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.143185 4834 scope.go:117] "RemoveContainer" containerID="e129a46f5722a8f928bdb0a561f56528a2d9b84f8e1ddbe288cb1c104b4a0844" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.143336 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-q499l/crc-debug-k7f7g" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.348355 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de786bc-1544-4ca5-9070-1c7aee9ca81b" path="/var/lib/kubelet/pods/2de786bc-1544-4ca5-9070-1c7aee9ca81b/volumes" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.758717 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bwn92_8b4d1ded-fcb3-456e-9704-776079ec120f/manager/0.log" Jan 21 16:43:14 crc kubenswrapper[4834]: I0121 16:43:14.891394 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-d5jml_4d010ccc-9dc9-4d66-9544-354bc82380ca/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.068156 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-lczqr_aa317ccb-4317-40f7-8661-20a62f36dd97/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.101824 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-qmndz_f3592517-3af0-41fd-bd16-da41fb583656/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.160669 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-bdrss_98485c62-7908-4ca9-8436-71dd52f371df/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.233156 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-nsvnj_d2a8e7a2-c785-48e3-a143-30c89c49fe36/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.569415 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-7ww2d_8728d4cf-ef2f-4e71-875f-6227ac7117db/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.626349 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-whz74_ff5898ae-a2fa-4ed4-9e56-74a5476ca185/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.653910 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986ddp2nz_2097f21e-3f3a-435e-9003-15846c98efbd/manager/0.log" Jan 21 16:43:15 crc kubenswrapper[4834]: I0121 16:43:15.845373 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-pr4wt_c4e147a8-154c-4e8d-ad04-4a58a81b4942/operator/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.125820 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-d4x4p_ce4027ee-3ff6-4f48-8eee-cac190eac5f9/manager/0.log" 
Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.289503 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xvgpp_365a1666-9c4a-4834-9ae1-e275ff9051b8/registry-server/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.324861 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:43:18 crc kubenswrapper[4834]: E0121 16:43:18.325270 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.398329 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-cv9xt_2e0a31fa-5416-4a94-a915-69f0561794bc/manager/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.435729 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-28wfl_e8086139-0fe1-4859-8e4c-94eea0dd6a18/manager/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.467260 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wnz6s_bca7db69-2696-4e85-96dc-7c9140549f9a/operator/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.494773 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-lt695_69493e6c-ad47-4a9e-aad4-b93cee5d4ac7/manager/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.637866 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-hlcxr_458dc6de-2219-419b-ab95-76a72a77a097/manager/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.646318 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-92r4m_5c2b9737-fefd-4db7-a60d-5343a8b44554/manager/0.log" Jan 21 16:43:18 crc kubenswrapper[4834]: I0121 16:43:18.696485 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-44bkf_941ce977-5cf7-49e2-b96d-8446fefa95cd/manager/0.log" Jan 21 16:43:21 crc kubenswrapper[4834]: I0121 16:43:21.211843 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5596r_d64b1f50-155f-44f0-b9ba-90e1e59fc1ce/control-plane-machine-set-operator/0.log" Jan 21 16:43:21 crc kubenswrapper[4834]: I0121 16:43:21.231674 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lgkd6_168c6706-629a-4de3-9010-4a6ad7fb1f60/kube-rbac-proxy/0.log" Jan 21 16:43:21 crc kubenswrapper[4834]: I0121 16:43:21.245346 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lgkd6_168c6706-629a-4de3-9010-4a6ad7fb1f60/machine-api-operator/0.log" Jan 21 16:43:30 crc kubenswrapper[4834]: I0121 16:43:30.325456 4834 scope.go:117] "RemoveContainer" 
containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:43:30 crc kubenswrapper[4834]: E0121 16:43:30.327891 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:43:43 crc kubenswrapper[4834]: I0121 16:43:43.325228 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:43:43 crc kubenswrapper[4834]: E0121 16:43:43.326066 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:43:54 crc kubenswrapper[4834]: I0121 16:43:54.331366 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:43:54 crc kubenswrapper[4834]: E0121 16:43:54.332081 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:44:05 crc kubenswrapper[4834]: I0121 16:44:05.325404 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:44:05 crc kubenswrapper[4834]: E0121 16:44:05.327001 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:44:17 crc kubenswrapper[4834]: I0121 16:44:17.324888 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:44:17 crc kubenswrapper[4834]: E0121 16:44:17.326828 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:44:29 crc kubenswrapper[4834]: I0121 16:44:29.325425 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:44:29 crc kubenswrapper[4834]: E0121 16:44:29.326441 4834 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:44:41 crc kubenswrapper[4834]: I0121 16:44:41.325222 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:44:41 crc kubenswrapper[4834]: E0121 16:44:41.325910 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:44:55 crc kubenswrapper[4834]: I0121 16:44:55.326358 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:44:55 crc kubenswrapper[4834]: E0121 16:44:55.327407 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.167128 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz"] Jan 21 16:45:00 crc kubenswrapper[4834]: E0121 16:45:00.168180 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de786bc-1544-4ca5-9070-1c7aee9ca81b" containerName="container-00" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.168196 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de786bc-1544-4ca5-9070-1c7aee9ca81b" containerName="container-00" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.168462 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de786bc-1544-4ca5-9070-1c7aee9ca81b" containerName="container-00" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.169480 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.172418 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.182512 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.200426 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz"] Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.249066 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xpm\" (UniqueName: \"kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.249169 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.249198 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.351332 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54xpm\" (UniqueName: \"kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.351442 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.351468 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.352527 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume\") pod 
\"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.357411 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.373687 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xpm\" (UniqueName: \"kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm\") pod \"collect-profiles-29483565-zd6dz\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:00 crc kubenswrapper[4834]: I0121 16:45:00.498533 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:01 crc kubenswrapper[4834]: I0121 16:45:01.012882 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz"] Jan 21 16:45:01 crc kubenswrapper[4834]: W0121 16:45:01.016521 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e424ff_e7e6_4116_a7ee_a7dc4209d7e3.slice/crio-1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387 WatchSource:0}: Error finding container 1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387: Status 404 returned error can't find the container with id 1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387 Jan 21 16:45:01 crc kubenswrapper[4834]: I0121 16:45:01.464807 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" event={"ID":"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3","Type":"ContainerStarted","Data":"9a0ddbb519aaf8915e4e2627f704be4c6ece8f7180f4100fd1211b1b4dde2997"} Jan 21 16:45:01 crc kubenswrapper[4834]: I0121 16:45:01.464868 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" event={"ID":"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3","Type":"ContainerStarted","Data":"1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387"} Jan 21 16:45:01 crc kubenswrapper[4834]: I0121 16:45:01.502481 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" podStartSLOduration=1.502452524 podStartE2EDuration="1.502452524s" podCreationTimestamp="2026-01-21 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:45:01.494712241 +0000 UTC m=+8047.469061286" watchObservedRunningTime="2026-01-21 16:45:01.502452524 +0000 UTC m=+8047.476801569" Jan 21 16:45:02 crc kubenswrapper[4834]: I0121 16:45:02.477490 4834 generic.go:334] "Generic (PLEG): container finished" podID="75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" containerID="9a0ddbb519aaf8915e4e2627f704be4c6ece8f7180f4100fd1211b1b4dde2997" exitCode=0 Jan 21 16:45:02 crc kubenswrapper[4834]: I0121 16:45:02.477839 
4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" event={"ID":"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3","Type":"ContainerDied","Data":"9a0ddbb519aaf8915e4e2627f704be4c6ece8f7180f4100fd1211b1b4dde2997"} Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.854988 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.938337 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume\") pod \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.938475 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54xpm\" (UniqueName: \"kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm\") pod \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.938583 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume\") pod \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\" (UID: \"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3\") " Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.939820 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" (UID: "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.945220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm" (OuterVolumeSpecName: "kube-api-access-54xpm") pod "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" (UID: "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3"). InnerVolumeSpecName "kube-api-access-54xpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:03 crc kubenswrapper[4834]: I0121 16:45:03.945834 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" (UID: "75e424ff-e7e6-4116-a7ee-a7dc4209d7e3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.041141 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.041200 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.041217 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54xpm\" (UniqueName: \"kubernetes.io/projected/75e424ff-e7e6-4116-a7ee-a7dc4209d7e3-kube-api-access-54xpm\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.502329 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" event={"ID":"75e424ff-e7e6-4116-a7ee-a7dc4209d7e3","Type":"ContainerDied","Data":"1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387"} Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.502846 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1897f5390c620bf10f31e2956f4ed93537a6e524e2680915c2f54fa3bb1f9387" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.502379 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-zd6dz" Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.585124 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"] Jan 21 16:45:04 crc kubenswrapper[4834]: I0121 16:45:04.595782 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-dws5l"] Jan 21 16:45:06 crc kubenswrapper[4834]: I0121 16:45:06.339905 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29a68c6-c05b-4db2-aa94-52932db05d0b" path="/var/lib/kubelet/pods/c29a68c6-c05b-4db2-aa94-52932db05d0b/volumes" Jan 21 16:45:10 crc kubenswrapper[4834]: I0121 16:45:10.324705 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:45:10 crc kubenswrapper[4834]: E0121 16:45:10.325879 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:45:15 crc kubenswrapper[4834]: I0121 16:45:15.180771 4834 scope.go:117] "RemoveContainer" containerID="07f4a4d5045f5863709cbb7e00d827610d53702f8e0c223c48eba0ed37daea14" Jan 21 16:45:24 crc kubenswrapper[4834]: I0121 16:45:24.332417 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:45:24 crc kubenswrapper[4834]: I0121 16:45:24.705484 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c"} Jan 21 16:45:26 crc kubenswrapper[4834]: I0121 16:45:26.826373 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-bq6sg_a6b51264-73aa-41b5-ac58-8adc362ca2c5/cert-manager-controller/0.log" Jan 21 16:45:26 crc kubenswrapper[4834]: I0121 16:45:26.850566 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-28mgw_8c8bf664-44d3-4476-8b5c-1ed1f755fb6a/cert-manager-cainjector/0.log" Jan 21 16:45:26 crc kubenswrapper[4834]: I0121 16:45:26.860746 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6fw8m_9d45a000-2985-4340-a80c-533709ceed95/cert-manager-webhook/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.716598 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:32 crc kubenswrapper[4834]: E0121 16:45:32.717792 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" containerName="collect-profiles" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.717811 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" containerName="collect-profiles" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.718115 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e424ff-e7e6-4116-a7ee-a7dc4209d7e3" containerName="collect-profiles" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.720517 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.733690 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.795980 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-jnt5z_c74043b9-a7ba-40e4-9263-2e093fe9e7a6/nmstate-console-plugin/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.824847 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.825056 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.825518 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlf2n\" (UniqueName: \"kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc 
kubenswrapper[4834]: I0121 16:45:32.827841 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2hj92_9ea294a4-02f7-4dcc-9127-12ed01d12b40/nmstate-handler/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.848793 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b69n7_f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc/nmstate-metrics/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.868113 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b69n7_f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc/kube-rbac-proxy/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.883110 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fv8ww_f2d9a779-b241-41cd-b261-9f437b8cac1f/nmstate-operator/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.897217 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-dkwnw_dc2d6e5b-0933-409b-8934-cec8c98f5f7a/nmstate-webhook/0.log" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.929046 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlf2n\" (UniqueName: \"kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.929220 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.929294 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.929889 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.929944 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:32 crc kubenswrapper[4834]: I0121 16:45:32.970091 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlf2n\" (UniqueName: \"kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n\") pod \"certified-operators-p2nlk\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:33 crc 
kubenswrapper[4834]: I0121 16:45:33.051993 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:33 crc kubenswrapper[4834]: I0121 16:45:33.663203 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:33 crc kubenswrapper[4834]: I0121 16:45:33.814233 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerStarted","Data":"5c62ec25ce640b87396c56bec82930c46cfe58abe96e5edf02c7b8b373b6d423"} Jan 21 16:45:34 crc kubenswrapper[4834]: I0121 16:45:34.825783 4834 generic.go:334] "Generic (PLEG): container finished" podID="e91140b0-26f7-4113-84dd-8df42f932446" containerID="9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da" exitCode=0 Jan 21 16:45:34 crc kubenswrapper[4834]: I0121 16:45:34.825884 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerDied","Data":"9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da"} Jan 21 16:45:34 crc kubenswrapper[4834]: I0121 16:45:34.830108 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:45:36 crc kubenswrapper[4834]: I0121 16:45:36.848364 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerStarted","Data":"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e"} Jan 21 16:45:37 crc kubenswrapper[4834]: I0121 16:45:37.859254 4834 generic.go:334] "Generic (PLEG): container finished" podID="e91140b0-26f7-4113-84dd-8df42f932446" containerID="5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e" exitCode=0 Jan 21 16:45:37 crc kubenswrapper[4834]: I0121 16:45:37.859344 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerDied","Data":"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e"} Jan 21 16:45:38 crc kubenswrapper[4834]: I0121 16:45:38.877109 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerStarted","Data":"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367"} Jan 21 16:45:38 crc kubenswrapper[4834]: I0121 16:45:38.902171 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2nlk" podStartSLOduration=3.392012887 podStartE2EDuration="6.902141906s" podCreationTimestamp="2026-01-21 16:45:32 +0000 UTC" firstStartedPulling="2026-01-21 16:45:34.829693855 +0000 UTC m=+8080.804042900" lastFinishedPulling="2026-01-21 16:45:38.339822874 +0000 UTC m=+8084.314171919" observedRunningTime="2026-01-21 16:45:38.899251205 +0000 UTC m=+8084.873600250" watchObservedRunningTime="2026-01-21 16:45:38.902141906 +0000 UTC m=+8084.876490951" Jan 21 16:45:40 crc kubenswrapper[4834]: I0121 16:45:40.271891 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-7psl7_89f9bbd9-52c4-4da9-940f-1c6b73caf38a/prometheus-operator/0.log" Jan 21 16:45:40 crc 
kubenswrapper[4834]: I0121 16:45:40.288357 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-649n8_4cbdb85b-45e6-4a13-b640-606a0c1d0ebc/prometheus-operator-admission-webhook/0.log" Jan 21 16:45:40 crc kubenswrapper[4834]: I0121 16:45:40.302074 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-bmxdh_62724f29-4a17-4c89-85aa-060935c8c462/prometheus-operator-admission-webhook/0.log" Jan 21 16:45:40 crc kubenswrapper[4834]: I0121 16:45:40.330215 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lwpmg_30071fcc-7598-4f5d-94a4-af2abfcc9ed3/operator/0.log" Jan 21 16:45:40 crc kubenswrapper[4834]: I0121 16:45:40.344725 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-45x98_a53ef277-6aa4-4cdc-b098-4cd5ac373b0e/perses-operator/0.log" Jan 21 16:45:43 crc kubenswrapper[4834]: I0121 16:45:43.053004 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:43 crc kubenswrapper[4834]: I0121 16:45:43.053419 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:43 crc kubenswrapper[4834]: I0121 16:45:43.105088 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:43 crc kubenswrapper[4834]: I0121 16:45:43.993458 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:44 crc kubenswrapper[4834]: I0121 16:45:44.050186 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:45 crc kubenswrapper[4834]: I0121 16:45:45.958667 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2nlk" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="registry-server" containerID="cri-o://9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367" gracePeriod=2 Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.576736 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.587516 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/controller/0.log" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.599737 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/kube-rbac-proxy/0.log" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.639480 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/controller/0.log" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.674235 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities\") pod \"e91140b0-26f7-4113-84dd-8df42f932446\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.674301 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlf2n\" (UniqueName: \"kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n\") pod \"e91140b0-26f7-4113-84dd-8df42f932446\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.674486 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content\") pod \"e91140b0-26f7-4113-84dd-8df42f932446\" (UID: \"e91140b0-26f7-4113-84dd-8df42f932446\") " Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.675633 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities" (OuterVolumeSpecName: "utilities") pod "e91140b0-26f7-4113-84dd-8df42f932446" (UID: "e91140b0-26f7-4113-84dd-8df42f932446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.691739 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n" (OuterVolumeSpecName: "kube-api-access-mlf2n") pod "e91140b0-26f7-4113-84dd-8df42f932446" (UID: "e91140b0-26f7-4113-84dd-8df42f932446"). InnerVolumeSpecName "kube-api-access-mlf2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.741880 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e91140b0-26f7-4113-84dd-8df42f932446" (UID: "e91140b0-26f7-4113-84dd-8df42f932446"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.778785 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.778826 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91140b0-26f7-4113-84dd-8df42f932446-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.778837 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlf2n\" (UniqueName: \"kubernetes.io/projected/e91140b0-26f7-4113-84dd-8df42f932446-kube-api-access-mlf2n\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.978278 4834 generic.go:334] "Generic (PLEG): container finished" podID="e91140b0-26f7-4113-84dd-8df42f932446" containerID="9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367" exitCode=0 Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.978870 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2nlk" Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.979112 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerDied","Data":"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367"} Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.979164 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2nlk" event={"ID":"e91140b0-26f7-4113-84dd-8df42f932446","Type":"ContainerDied","Data":"5c62ec25ce640b87396c56bec82930c46cfe58abe96e5edf02c7b8b373b6d423"} Jan 21 16:45:46 crc kubenswrapper[4834]: I0121 16:45:46.979188 4834 scope.go:117] "RemoveContainer" containerID="9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.021573 4834 scope.go:117] "RemoveContainer" containerID="5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.039497 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.056154 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2nlk"] Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.080340 4834 scope.go:117] "RemoveContainer" containerID="9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.126690 4834 scope.go:117] "RemoveContainer" containerID="9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367" Jan 21 16:45:47 crc kubenswrapper[4834]: E0121 16:45:47.127509 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367\": container with ID starting with 9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367 not found: ID does not exist" containerID="9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.127558 
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367"} err="failed to get container status \"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367\": rpc error: code = NotFound desc = could not find container \"9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367\": container with ID starting with 9af32bae98673c98ededd8973bcfbc9e0bbfc39841842832231a7dea19b1e367 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.127586 4834 scope.go:117] "RemoveContainer" containerID="5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e" Jan 21 16:45:47 crc kubenswrapper[4834]: E0121 16:45:47.128198 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e\": container with ID starting with 5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e not found: ID does not exist" containerID="5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.128231 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e"} err="failed to get container status \"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e\": rpc error: code = NotFound desc = could not find container \"5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e\": container with ID starting with 5d28ae2887c2bbb1a8dfe936f09c0ba16a7e9523a90bf125786bfb754f2af37e not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.128251 4834 scope.go:117] "RemoveContainer" containerID="9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da" Jan 21 16:45:47 crc kubenswrapper[4834]: E0121 16:45:47.128625 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da\": container with ID starting with 9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da not found: ID does not exist" containerID="9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da" Jan 21 16:45:47 crc kubenswrapper[4834]: I0121 16:45:47.128651 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da"} err="failed to get container status \"9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da\": rpc error: code = NotFound desc = could not find container \"9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da\": container with ID starting with 9fc94b3fdd3b7e8e788f2c90b1e774caa495e657b7d96e74fdc79342d40a28da not found: ID does not exist" Jan 21 16:45:48 crc kubenswrapper[4834]: I0121 16:45:48.356636 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91140b0-26f7-4113-84dd-8df42f932446" path="/var/lib/kubelet/pods/e91140b0-26f7-4113-84dd-8df42f932446/volumes" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.068399 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.084667 4834 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.091491 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr-metrics/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.103546 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.117160 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy-frr/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.127272 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-frr-files/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.140465 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-reloader/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.155709 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-metrics/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.172370 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8g2fp_2dc8a2ee-3729-4765-86aa-4f9b89a00c79/frr-k8s-webhook-server/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.202043 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9f5f6b6d-8k84x_2e030c1c-2b95-4ea1-a9be-91a707b92e15/manager/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.251028 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-794b596549-wm7g7_09ca80fd-b29f-4eca-8ad2-28f29cb91e78/webhook-server/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.909438 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/speaker/0.log" Jan 21 16:45:50 crc kubenswrapper[4834]: I0121 16:45:50.919116 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/kube-rbac-proxy/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.893514 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb_b7d7a5da-2462-4992-9837-9cb1e54ea157/extract/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.904238 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb_b7d7a5da-2462-4992-9837-9cb1e54ea157/util/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.937914 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931abmdwb_b7d7a5da-2462-4992-9837-9cb1e54ea157/pull/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.951665 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8_f50d9192-f6ea-4d53-b1fc-ce8650f422ba/extract/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.961499 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8_f50d9192-f6ea-4d53-b1fc-ce8650f422ba/util/0.log" Jan 21 16:45:52 crc kubenswrapper[4834]: I0121 16:45:52.973228 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7l2c8_f50d9192-f6ea-4d53-b1fc-ce8650f422ba/pull/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.002239 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh_a64a71e4-a7d7-4267-978c-48140c262706/extract/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.008109 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh_a64a71e4-a7d7-4267-978c-48140c262706/util/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.018710 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713qcpwh_a64a71e4-a7d7-4267-978c-48140c262706/pull/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.033653 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq_2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63/extract/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.041058 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq_2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63/util/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.057256 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08kjbcq_2ee4ae70-6fb5-4656-b1c6-f311a8f4ad63/pull/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.836674 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9zgs7_5a591efb-81ec-493d-bf9c-40c1dc4cac3d/registry-server/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.849508 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9zgs7_5a591efb-81ec-493d-bf9c-40c1dc4cac3d/extract-utilities/0.log" Jan 21 16:45:53 crc kubenswrapper[4834]: I0121 16:45:53.863557 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9zgs7_5a591efb-81ec-493d-bf9c-40c1dc4cac3d/extract-content/0.log" Jan 21 16:45:54 crc kubenswrapper[4834]: I0121 16:45:54.882039 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j79vm_8b906389-a061-4191-a46e-b0950bacd6ff/registry-server/0.log" Jan 21 16:45:54 crc kubenswrapper[4834]: I0121 16:45:54.888269 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j79vm_8b906389-a061-4191-a46e-b0950bacd6ff/extract-utilities/0.log" Jan 21 16:45:54 crc kubenswrapper[4834]: I0121 16:45:54.896955 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-j79vm_8b906389-a061-4191-a46e-b0950bacd6ff/extract-content/0.log" Jan 21 16:45:54 crc kubenswrapper[4834]: I0121 16:45:54.931138 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lt7k4_494aef4f-fcf4-422a-be16-b39449045941/marketplace-operator/0.log" Jan 21 16:45:55 crc kubenswrapper[4834]: I0121 16:45:55.332198 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8bbsd_d3416206-7706-4072-af2f-e5bb2606aef0/registry-server/0.log" Jan 21 16:45:55 crc kubenswrapper[4834]: I0121 16:45:55.339828 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8bbsd_d3416206-7706-4072-af2f-e5bb2606aef0/extract-utilities/0.log" Jan 21 16:45:55 crc kubenswrapper[4834]: I0121 16:45:55.360331 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8bbsd_d3416206-7706-4072-af2f-e5bb2606aef0/extract-content/0.log" Jan 21 16:45:56 crc kubenswrapper[4834]: I0121 16:45:56.657761 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p7g4k_81c693e3-bcc4-4d1f-80d4-cf7aed592bc7/registry-server/0.log" Jan 21 16:45:56 crc kubenswrapper[4834]: I0121 16:45:56.668238 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p7g4k_81c693e3-bcc4-4d1f-80d4-cf7aed592bc7/extract-utilities/0.log" Jan 21 16:45:56 crc kubenswrapper[4834]: I0121 16:45:56.681147 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p7g4k_81c693e3-bcc4-4d1f-80d4-cf7aed592bc7/extract-content/0.log" Jan 21 16:45:59 crc kubenswrapper[4834]: I0121 16:45:59.161453 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-7psl7_89f9bbd9-52c4-4da9-940f-1c6b73caf38a/prometheus-operator/0.log" Jan 21 16:45:59 crc kubenswrapper[4834]: I0121 16:45:59.176942 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-649n8_4cbdb85b-45e6-4a13-b640-606a0c1d0ebc/prometheus-operator-admission-webhook/0.log" Jan 21 16:45:59 crc kubenswrapper[4834]: I0121 16:45:59.188395 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-bmxdh_62724f29-4a17-4c89-85aa-060935c8c462/prometheus-operator-admission-webhook/0.log" Jan 21 16:45:59 crc kubenswrapper[4834]: I0121 16:45:59.210169 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lwpmg_30071fcc-7598-4f5d-94a4-af2abfcc9ed3/operator/0.log" Jan 21 16:45:59 crc kubenswrapper[4834]: I0121 16:45:59.220855 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-45x98_a53ef277-6aa4-4cdc-b098-4cd5ac373b0e/perses-operator/0.log" Jan 21 16:47:47 crc kubenswrapper[4834]: I0121 16:47:47.114367 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:47 crc kubenswrapper[4834]: I0121 16:47:47.114873 4834 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:48 crc kubenswrapper[4834]: I0121 16:47:48.898237 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-7psl7_89f9bbd9-52c4-4da9-940f-1c6b73caf38a/prometheus-operator/0.log" Jan 21 16:47:48 crc kubenswrapper[4834]: I0121 16:47:48.922745 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-649n8_4cbdb85b-45e6-4a13-b640-606a0c1d0ebc/prometheus-operator-admission-webhook/0.log" Jan 21 16:47:48 crc kubenswrapper[4834]: I0121 16:47:48.944212 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-8674496d68-bmxdh_62724f29-4a17-4c89-85aa-060935c8c462/prometheus-operator-admission-webhook/0.log" Jan 21 16:47:48 crc kubenswrapper[4834]: I0121 16:47:48.967523 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-lwpmg_30071fcc-7598-4f5d-94a4-af2abfcc9ed3/operator/0.log" Jan 21 16:47:48 crc kubenswrapper[4834]: I0121 16:47:48.991376 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-45x98_a53ef277-6aa4-4cdc-b098-4cd5ac373b0e/perses-operator/0.log" Jan 21 16:47:49 crc kubenswrapper[4834]: I0121 16:47:49.215083 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-bq6sg_a6b51264-73aa-41b5-ac58-8adc362ca2c5/cert-manager-controller/0.log" Jan 21 16:47:49 crc kubenswrapper[4834]: I0121 16:47:49.236852 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-28mgw_8c8bf664-44d3-4476-8b5c-1ed1f755fb6a/cert-manager-cainjector/0.log" Jan 21 16:47:49 crc kubenswrapper[4834]: I0121 16:47:49.253924 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6fw8m_9d45a000-2985-4340-a80c-533709ceed95/cert-manager-webhook/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.326330 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/extract/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.340631 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/util/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.348433 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/pull/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.473618 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/controller/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.483081 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-4qs4k_fdcc3bcf-ed3c-42b3-aaed-02aedc639655/kube-rbac-proxy/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.514112 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/controller/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.518970 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-v7r86_ac1f15b1-9c34-49e4-957a-74a950b6583f/manager/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.644082 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qdt7x_00cc8ba5-f3f5-42e9-a23a-6c3b1989763b/manager/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.661048 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dfmb5_2bf8d583-f505-436a-a60f-ec418f4d5e94/manager/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.901751 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-x56p7_dded3929-2919-4903-8465-da99004a3cd6/manager/0.log" Jan 21 16:47:50 crc kubenswrapper[4834]: I0121 16:47:50.984485 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2dj64_ae8ff7f9-4d1b-4562-a307-f9ad95966c48/manager/0.log" Jan 21 16:47:51 crc kubenswrapper[4834]: I0121 16:47:51.010157 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-jvfvr_c181377f-ebc2-4ebf-ba81-71d6609d37c4/manager/0.log" Jan 21 16:47:51 crc kubenswrapper[4834]: I0121 16:47:51.785655 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bwn92_8b4d1ded-fcb3-456e-9704-776079ec120f/manager/0.log" Jan 21 16:47:51 crc kubenswrapper[4834]: I0121 16:47:51.803316 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-d5jml_4d010ccc-9dc9-4d66-9544-354bc82380ca/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.004950 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-lczqr_aa317ccb-4317-40f7-8661-20a62f36dd97/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.060827 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-qmndz_f3592517-3af0-41fd-bd16-da41fb583656/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.162438 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-bdrss_98485c62-7908-4ca9-8436-71dd52f371df/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.271225 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-nsvnj_d2a8e7a2-c785-48e3-a143-30c89c49fe36/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.562025 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-7ww2d_8728d4cf-ef2f-4e71-875f-6227ac7117db/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.635552 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-whz74_ff5898ae-a2fa-4ed4-9e56-74a5476ca185/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.653193 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986ddp2nz_2097f21e-3f3a-435e-9003-15846c98efbd/manager/0.log" Jan 21 16:47:52 crc kubenswrapper[4834]: I0121 16:47:52.937126 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-pr4wt_c4e147a8-154c-4e8d-ad04-4a58a81b4942/operator/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.502334 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.515537 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/reloader/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.522066 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/frr-metrics/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.532327 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.541526 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/kube-rbac-proxy-frr/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.551606 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-frr-files/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.562733 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-reloader/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.569369 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nmfb6_872f6769-1a60-42d1-911d-0db9cfba03ce/cp-metrics/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.582997 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8g2fp_2dc8a2ee-3729-4765-86aa-4f9b89a00c79/frr-k8s-webhook-server/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.617181 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9f5f6b6d-8k84x_2e030c1c-2b95-4ea1-a9be-91a707b92e15/manager/0.log" Jan 21 16:47:55 crc kubenswrapper[4834]: I0121 16:47:55.628917 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-794b596549-wm7g7_09ca80fd-b29f-4eca-8ad2-28f29cb91e78/webhook-server/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.511018 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-d4x4p_ce4027ee-3ff6-4f48-8eee-cac190eac5f9/manager/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.615038 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/speaker/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.627885 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jqkhx_fa1a9082-c606-4634-9715-1b81c9f0137f/kube-rbac-proxy/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.677679 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xvgpp_365a1666-9c4a-4834-9ae1-e275ff9051b8/registry-server/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.780670 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-cv9xt_2e0a31fa-5416-4a94-a915-69f0561794bc/manager/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.820714 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-28wfl_e8086139-0fe1-4859-8e4c-94eea0dd6a18/manager/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.850854 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wnz6s_bca7db69-2696-4e85-96dc-7c9140549f9a/operator/0.log" Jan 21 16:47:56 crc kubenswrapper[4834]: I0121 16:47:56.887681 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-lt695_69493e6c-ad47-4a9e-aad4-b93cee5d4ac7/manager/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.046511 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-hlcxr_458dc6de-2219-419b-ab95-76a72a77a097/manager/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.057146 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-92r4m_5c2b9737-fefd-4db7-a60d-5343a8b44554/manager/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.071705 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-44bkf_941ce977-5cf7-49e2-b96d-8446fefa95cd/manager/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.622430 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-bq6sg_a6b51264-73aa-41b5-ac58-8adc362ca2c5/cert-manager-controller/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.641847 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-28mgw_8c8bf664-44d3-4476-8b5c-1ed1f755fb6a/cert-manager-cainjector/0.log" Jan 21 16:47:57 crc kubenswrapper[4834]: I0121 16:47:57.651307 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-6fw8m_9d45a000-2985-4340-a80c-533709ceed95/cert-manager-webhook/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.317289 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5596r_d64b1f50-155f-44f0-b9ba-90e1e59fc1ce/control-plane-machine-set-operator/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.346618 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lgkd6_168c6706-629a-4de3-9010-4a6ad7fb1f60/kube-rbac-proxy/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.355590 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lgkd6_168c6706-629a-4de3-9010-4a6ad7fb1f60/machine-api-operator/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.540376 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-jnt5z_c74043b9-a7ba-40e4-9263-2e093fe9e7a6/nmstate-console-plugin/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.561762 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2hj92_9ea294a4-02f7-4dcc-9127-12ed01d12b40/nmstate-handler/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.585775 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b69n7_f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc/nmstate-metrics/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.603545 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b69n7_f8f4ec9e-62a3-4c4a-b1e0-5d8a5a209ffc/kube-rbac-proxy/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.658440 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fv8ww_f2d9a779-b241-41cd-b261-9f437b8cac1f/nmstate-operator/0.log" Jan 21 16:47:58 crc kubenswrapper[4834]: I0121 16:47:58.871766 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-dkwnw_dc2d6e5b-0933-409b-8934-cec8c98f5f7a/nmstate-webhook/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.462342 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/extract/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.475000 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/util/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.484112 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7f8269a825e737cb1f2e67fcbeccb826d8bfc6ea337cf3db10b8143e2exphpv_f23fb8fd-32c6-45d6-a0d8-97555eccacc6/pull/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.630806 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-v7r86_ac1f15b1-9c34-49e4-957a-74a950b6583f/manager/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.720314 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qdt7x_00cc8ba5-f3f5-42e9-a23a-6c3b1989763b/manager/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.734517 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dfmb5_2bf8d583-f505-436a-a60f-ec418f4d5e94/manager/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.885906 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-x56p7_dded3929-2919-4903-8465-da99004a3cd6/manager/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.937424 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-2dj64_ae8ff7f9-4d1b-4562-a307-f9ad95966c48/manager/0.log" Jan 21 16:47:59 crc kubenswrapper[4834]: I0121 16:47:59.955904 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-jvfvr_c181377f-ebc2-4ebf-ba81-71d6609d37c4/manager/0.log" Jan 21 16:48:00 crc kubenswrapper[4834]: I0121 16:48:00.631744 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-bwn92_8b4d1ded-fcb3-456e-9704-776079ec120f/manager/0.log" Jan 21 16:48:00 crc kubenswrapper[4834]: I0121 16:48:00.650220 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-d5jml_4d010ccc-9dc9-4d66-9544-354bc82380ca/manager/0.log" Jan 21 16:48:00 crc kubenswrapper[4834]: I0121 16:48:00.808785 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-lczqr_aa317ccb-4317-40f7-8661-20a62f36dd97/manager/0.log" Jan 21 16:48:00 crc kubenswrapper[4834]: I0121 16:48:00.859246 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-qmndz_f3592517-3af0-41fd-bd16-da41fb583656/manager/0.log" Jan 21 16:48:00 crc kubenswrapper[4834]: I0121 16:48:00.943097 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-bdrss_98485c62-7908-4ca9-8436-71dd52f371df/manager/0.log" Jan 21 16:48:01 crc kubenswrapper[4834]: I0121 16:48:01.036900 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-nsvnj_d2a8e7a2-c785-48e3-a143-30c89c49fe36/manager/0.log" Jan 21 16:48:01 crc kubenswrapper[4834]: I0121 16:48:01.245686 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-7ww2d_8728d4cf-ef2f-4e71-875f-6227ac7117db/manager/0.log" Jan 21 16:48:01 crc kubenswrapper[4834]: I0121 16:48:01.314290 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-whz74_ff5898ae-a2fa-4ed4-9e56-74a5476ca185/manager/0.log" Jan 21 16:48:01 crc kubenswrapper[4834]: I0121 16:48:01.334189 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5b9875986ddp2nz_2097f21e-3f3a-435e-9003-15846c98efbd/manager/0.log" Jan 21 16:48:01 crc kubenswrapper[4834]: I0121 16:48:01.542803 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6d4d7d8545-pr4wt_c4e147a8-154c-4e8d-ad04-4a58a81b4942/operator/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.048028 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75bfd788c8-d4x4p_ce4027ee-3ff6-4f48-8eee-cac190eac5f9/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.192812 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xvgpp_365a1666-9c4a-4834-9ae1-e275ff9051b8/registry-server/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.330219 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-cv9xt_2e0a31fa-5416-4a94-a915-69f0561794bc/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.381239 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-28wfl_e8086139-0fe1-4859-8e4c-94eea0dd6a18/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.413971 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wnz6s_bca7db69-2696-4e85-96dc-7c9140549f9a/operator/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.440643 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-lt695_69493e6c-ad47-4a9e-aad4-b93cee5d4ac7/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.592557 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-hlcxr_458dc6de-2219-419b-ab95-76a72a77a097/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.603621 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-92r4m_5c2b9737-fefd-4db7-a60d-5343a8b44554/manager/0.log" Jan 21 16:48:04 crc kubenswrapper[4834]: I0121 16:48:04.624260 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-44bkf_941ce977-5cf7-49e2-b96d-8446fefa95cd/manager/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.301953 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/kube-multus-additional-cni-plugins/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.320115 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/egress-router-binary-copy/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.339434 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/cni-plugins/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.353263 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/bond-cni-plugin/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.358574 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/routeoverride-cni/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.368813 4834 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/whereabouts-cni-bincopy/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.374004 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-66jlt_a8efbee9-2d1d-473f-ad38-b10d84821e23/whereabouts-cni/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.416994 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-fl4gd_32b16913-b72e-4e5f-b684-913111a08bd7/multus-admission-controller/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.428053 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-fl4gd_32b16913-b72e-4e5f-b684-913111a08bd7/kube-rbac-proxy/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.502504 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/2.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.705910 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gd9jh_dbe1b4f9-f835-43ba-9496-a9e60af3b87f/kube-multus/3.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.763186 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtqf2_d31034df-9ceb-49b0-9ad5-334dcaa28fa4/network-metrics-daemon/0.log" Jan 21 16:48:06 crc kubenswrapper[4834]: I0121 16:48:06.770819 4834 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtqf2_d31034df-9ceb-49b0-9ad5-334dcaa28fa4/kube-rbac-proxy/0.log" Jan 21 16:48:17 crc kubenswrapper[4834]: I0121 16:48:17.113838 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:17 crc kubenswrapper[4834]: I0121 16:48:17.114443 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:48:47 crc kubenswrapper[4834]: I0121 16:48:47.113725 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:48:47 crc kubenswrapper[4834]: I0121 16:48:47.114232 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:48:47 crc kubenswrapper[4834]: I0121 16:48:47.114285 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:48:47 crc kubenswrapper[4834]: I0121 16:48:47.115271 
4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:48:47 crc kubenswrapper[4834]: I0121 16:48:47.115329 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c" gracePeriod=600 Jan 21 16:48:48 crc kubenswrapper[4834]: I0121 16:48:48.177748 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c" exitCode=0 Jan 21 16:48:48 crc kubenswrapper[4834]: I0121 16:48:48.177914 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c"} Jan 21 16:48:48 crc kubenswrapper[4834]: I0121 16:48:48.178347 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09"} Jan 21 16:48:48 crc kubenswrapper[4834]: I0121 16:48:48.178393 4834 scope.go:117] "RemoveContainer" containerID="e24b10dfd763674887e9d6addbf148acac84b32ecfdfbbf9dbcf75c5d0c006cd" Jan 21 16:49:15 crc kubenswrapper[4834]: I0121 16:49:15.338778 4834 scope.go:117] "RemoveContainer" containerID="bbfc43ee85b976aede5aba63660d76fc7c904a1343294e9febce57a3303a7e94" Jan 21 16:49:15 crc kubenswrapper[4834]: I0121 16:49:15.364376 4834 scope.go:117] "RemoveContainer" containerID="62427e723d6a8df2823dec4036e0b86905e9a9fd77dd54c11e89a27c296d541b" Jan 21 16:49:15 crc kubenswrapper[4834]: I0121 16:49:15.434951 4834 scope.go:117] "RemoveContainer" containerID="f87b61f415c61d2153ffc06db88073fc07529cbaaaad496bc2e3bad619cf714a" Jan 21 16:49:15 crc kubenswrapper[4834]: I0121 16:49:15.474905 4834 scope.go:117] "RemoveContainer" containerID="53f8c73a37675553f4c49c3d435d4242e90126fe47cb116d2cf832859703a26c" Jan 21 16:50:47 crc kubenswrapper[4834]: I0121 16:50:47.113863 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:47 crc kubenswrapper[4834]: I0121 16:50:47.114571 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.352770 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:11 crc kubenswrapper[4834]: E0121 
16:51:11.353758 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="extract-content" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.353773 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="extract-content" Jan 21 16:51:11 crc kubenswrapper[4834]: E0121 16:51:11.353799 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="registry-server" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.353805 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="registry-server" Jan 21 16:51:11 crc kubenswrapper[4834]: E0121 16:51:11.353830 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="extract-utilities" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.353836 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="extract-utilities" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.354068 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91140b0-26f7-4113-84dd-8df42f932446" containerName="registry-server" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.355743 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.373898 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.442754 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.443019 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.443191 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjn7d\" (UniqueName: \"kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.544281 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjn7d\" (UniqueName: \"kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.544464 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.544567 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.545119 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.545183 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.566968 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjn7d\" (UniqueName: \"kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d\") pod \"community-operators-6dv8t\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:11 crc kubenswrapper[4834]: I0121 16:51:11.687633 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:12 crc kubenswrapper[4834]: I0121 16:51:12.406298 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:12 crc kubenswrapper[4834]: I0121 16:51:12.670674 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerStarted","Data":"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e"} Jan 21 16:51:12 crc kubenswrapper[4834]: I0121 16:51:12.671019 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerStarted","Data":"d780fb22d87510b9d9337245ef22f655c9c5898f8bada31313521677cd0e4168"} Jan 21 16:51:12 crc kubenswrapper[4834]: I0121 16:51:12.673163 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:51:13 crc kubenswrapper[4834]: I0121 16:51:13.681540 4834 generic.go:334] "Generic (PLEG): container finished" podID="390ea8d8-6816-41b9-9d0e-005940081359" containerID="207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e" exitCode=0 Jan 21 16:51:13 crc kubenswrapper[4834]: I0121 16:51:13.681741 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerDied","Data":"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e"} Jan 21 16:51:14 crc kubenswrapper[4834]: I0121 16:51:14.707696 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerStarted","Data":"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66"} Jan 21 16:51:15 crc kubenswrapper[4834]: I0121 16:51:15.723262 4834 generic.go:334] "Generic (PLEG): container finished" podID="390ea8d8-6816-41b9-9d0e-005940081359" containerID="969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66" exitCode=0 Jan 21 16:51:15 crc kubenswrapper[4834]: I0121 16:51:15.723392 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerDied","Data":"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66"} Jan 21 16:51:16 crc kubenswrapper[4834]: I0121 16:51:16.735270 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerStarted","Data":"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7"} Jan 21 16:51:16 crc kubenswrapper[4834]: I0121 16:51:16.756315 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dv8t" podStartSLOduration=2.294705688 podStartE2EDuration="5.756293153s" podCreationTimestamp="2026-01-21 16:51:11 +0000 UTC" firstStartedPulling="2026-01-21 16:51:12.672830893 +0000 UTC m=+8418.647179948" lastFinishedPulling="2026-01-21 16:51:16.134418368 +0000 UTC m=+8422.108767413" observedRunningTime="2026-01-21 16:51:16.754826038 +0000 UTC m=+8422.729175083" watchObservedRunningTime="2026-01-21 16:51:16.756293153 +0000 UTC m=+8422.730642198" Jan 21 16:51:17 crc kubenswrapper[4834]: I0121 
16:51:17.113479 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:17 crc kubenswrapper[4834]: I0121 16:51:17.113544 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:21 crc kubenswrapper[4834]: I0121 16:51:21.688279 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:21 crc kubenswrapper[4834]: I0121 16:51:21.688653 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:21 crc kubenswrapper[4834]: I0121 16:51:21.734556 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:21 crc kubenswrapper[4834]: I0121 16:51:21.830421 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:21 crc kubenswrapper[4834]: I0121 16:51:21.972545 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:23 crc kubenswrapper[4834]: I0121 16:51:23.805667 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dv8t" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="registry-server" containerID="cri-o://6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7" gracePeriod=2 Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.319330 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.446304 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content\") pod \"390ea8d8-6816-41b9-9d0e-005940081359\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.446359 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities\") pod \"390ea8d8-6816-41b9-9d0e-005940081359\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.446562 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjn7d\" (UniqueName: \"kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d\") pod \"390ea8d8-6816-41b9-9d0e-005940081359\" (UID: \"390ea8d8-6816-41b9-9d0e-005940081359\") " Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.448828 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities" (OuterVolumeSpecName: "utilities") pod "390ea8d8-6816-41b9-9d0e-005940081359" (UID: "390ea8d8-6816-41b9-9d0e-005940081359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.452968 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d" (OuterVolumeSpecName: "kube-api-access-jjn7d") pod "390ea8d8-6816-41b9-9d0e-005940081359" (UID: "390ea8d8-6816-41b9-9d0e-005940081359"). InnerVolumeSpecName "kube-api-access-jjn7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.513044 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "390ea8d8-6816-41b9-9d0e-005940081359" (UID: "390ea8d8-6816-41b9-9d0e-005940081359"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.549630 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjn7d\" (UniqueName: \"kubernetes.io/projected/390ea8d8-6816-41b9-9d0e-005940081359-kube-api-access-jjn7d\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.549698 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.549723 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/390ea8d8-6816-41b9-9d0e-005940081359-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.819231 4834 generic.go:334] "Generic (PLEG): container finished" podID="390ea8d8-6816-41b9-9d0e-005940081359" containerID="6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7" exitCode=0 Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.819300 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerDied","Data":"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7"} Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.819359 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dv8t" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.819425 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dv8t" event={"ID":"390ea8d8-6816-41b9-9d0e-005940081359","Type":"ContainerDied","Data":"d780fb22d87510b9d9337245ef22f655c9c5898f8bada31313521677cd0e4168"} Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.819456 4834 scope.go:117] "RemoveContainer" containerID="6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.850012 4834 scope.go:117] "RemoveContainer" containerID="969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.877315 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.889520 4834 scope.go:117] "RemoveContainer" containerID="207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.891870 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dv8t"] Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.933474 4834 scope.go:117] "RemoveContainer" containerID="6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7" Jan 21 16:51:24 crc kubenswrapper[4834]: E0121 16:51:24.933906 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7\": container with ID starting with 6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7 not found: ID does not exist" containerID="6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.933965 
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7"} err="failed to get container status \"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7\": rpc error: code = NotFound desc = could not find container \"6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7\": container with ID starting with 6cb0d463ef9b1b4d5a23b73ccdb3359777af2b9f71c180e378e57c3b9c7682d7 not found: ID does not exist" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.933996 4834 scope.go:117] "RemoveContainer" containerID="969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66" Jan 21 16:51:24 crc kubenswrapper[4834]: E0121 16:51:24.934420 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66\": container with ID starting with 969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66 not found: ID does not exist" containerID="969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.934541 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66"} err="failed to get container status \"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66\": rpc error: code = NotFound desc = could not find container \"969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66\": container with ID starting with 969e8649feacec7c58a2d06e260070be00ab4f6a601994ad20c0970905cbaf66 not found: ID does not exist" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.934575 4834 scope.go:117] "RemoveContainer" containerID="207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e" Jan 21 16:51:24 crc kubenswrapper[4834]: E0121 16:51:24.935110 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e\": container with ID starting with 207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e not found: ID does not exist" containerID="207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e" Jan 21 16:51:24 crc kubenswrapper[4834]: I0121 16:51:24.935156 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e"} err="failed to get container status \"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e\": rpc error: code = NotFound desc = could not find container \"207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e\": container with ID starting with 207048b8bb4d85f9d07bebaedb11c4649a0328a6a399904385b358d85c50613e not found: ID does not exist" Jan 21 16:51:26 crc kubenswrapper[4834]: I0121 16:51:26.342815 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390ea8d8-6816-41b9-9d0e-005940081359" path="/var/lib/kubelet/pods/390ea8d8-6816-41b9-9d0e-005940081359/volumes" Jan 21 16:51:47 crc kubenswrapper[4834]: I0121 16:51:47.114251 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:51:47 crc kubenswrapper[4834]: I0121 16:51:47.115786 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:47 crc kubenswrapper[4834]: I0121 16:51:47.115893 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 16:51:47 crc kubenswrapper[4834]: I0121 16:51:47.116860 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:51:47 crc kubenswrapper[4834]: I0121 16:51:47.117036 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" gracePeriod=600 Jan 21 16:51:47 crc kubenswrapper[4834]: E0121 16:51:47.261000 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:51:48 crc kubenswrapper[4834]: I0121 16:51:48.071149 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" exitCode=0 Jan 21 16:51:48 crc kubenswrapper[4834]: I0121 16:51:48.071206 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09"} Jan 21 16:51:48 crc kubenswrapper[4834]: I0121 16:51:48.071250 4834 scope.go:117] "RemoveContainer" containerID="688a6646fe124e2a90170883fc06ecfd7a53fc5f7bd2d7430cc910c492dbea1c" Jan 21 16:51:48 crc kubenswrapper[4834]: I0121 16:51:48.072149 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:51:48 crc kubenswrapper[4834]: E0121 16:51:48.072639 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:52:00 crc kubenswrapper[4834]: I0121 16:52:00.325361 4834 scope.go:117] "RemoveContainer" 
containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:52:00 crc kubenswrapper[4834]: E0121 16:52:00.326189 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.324673 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:52:14 crc kubenswrapper[4834]: E0121 16:52:14.325541 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.479551 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:14 crc kubenswrapper[4834]: E0121 16:52:14.480322 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="extract-utilities" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.480350 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="extract-utilities" Jan 21 16:52:14 crc kubenswrapper[4834]: E0121 16:52:14.480413 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="extract-content" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.480422 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="extract-content" Jan 21 16:52:14 crc kubenswrapper[4834]: E0121 16:52:14.480440 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="registry-server" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.480448 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="registry-server" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.480682 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="390ea8d8-6816-41b9-9d0e-005940081359" containerName="registry-server" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.483118 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.492888 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.638600 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.638956 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.638988 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mcz\" (UniqueName: \"kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.740685 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.740739 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mcz\" (UniqueName: \"kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.740787 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.741667 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.741674 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.763977 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4mcz\" (UniqueName: \"kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz\") pod \"redhat-operators-qf5f9\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:14 crc kubenswrapper[4834]: I0121 16:52:14.826037 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:15 crc kubenswrapper[4834]: I0121 16:52:15.437704 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:16 crc kubenswrapper[4834]: I0121 16:52:16.407035 4834 generic.go:334] "Generic (PLEG): container finished" podID="e5831b73-e052-414c-a239-d0a27d07bafb" containerID="76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e" exitCode=0 Jan 21 16:52:16 crc kubenswrapper[4834]: I0121 16:52:16.407151 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerDied","Data":"76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e"} Jan 21 16:52:16 crc kubenswrapper[4834]: I0121 16:52:16.407397 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerStarted","Data":"16e686b8da40fa2546d32015bfd5a92ef19af170f3fb7c31484cc9dd7d0b97d3"} Jan 21 16:52:18 crc kubenswrapper[4834]: I0121 16:52:18.430321 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerStarted","Data":"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167"} Jan 21 16:52:20 crc kubenswrapper[4834]: I0121 16:52:20.457784 4834 generic.go:334] "Generic (PLEG): container finished" podID="e5831b73-e052-414c-a239-d0a27d07bafb" containerID="9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167" exitCode=0 Jan 21 16:52:20 crc kubenswrapper[4834]: I0121 16:52:20.457877 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerDied","Data":"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167"} Jan 21 16:52:21 crc kubenswrapper[4834]: I0121 16:52:21.468213 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerStarted","Data":"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595"} Jan 21 16:52:21 crc kubenswrapper[4834]: I0121 16:52:21.500237 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qf5f9" podStartSLOduration=2.864264472 podStartE2EDuration="7.500184018s" podCreationTimestamp="2026-01-21 16:52:14 +0000 UTC" firstStartedPulling="2026-01-21 16:52:16.411652632 +0000 UTC m=+8482.386001677" lastFinishedPulling="2026-01-21 16:52:21.047572178 +0000 UTC m=+8487.021921223" observedRunningTime="2026-01-21 16:52:21.494308324 +0000 UTC m=+8487.468657369" watchObservedRunningTime="2026-01-21 16:52:21.500184018 +0000 UTC m=+8487.474533053" Jan 21 16:52:24 crc kubenswrapper[4834]: I0121 16:52:24.826653 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qf5f9" 
Jan 21 16:52:24 crc kubenswrapper[4834]: I0121 16:52:24.827479 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:25 crc kubenswrapper[4834]: I0121 16:52:25.883403 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qf5f9" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="registry-server" probeResult="failure" output=< Jan 21 16:52:25 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 16:52:25 crc kubenswrapper[4834]: > Jan 21 16:52:27 crc kubenswrapper[4834]: I0121 16:52:27.326213 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:52:27 crc kubenswrapper[4834]: E0121 16:52:27.326798 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:52:34 crc kubenswrapper[4834]: I0121 16:52:34.878736 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:34 crc kubenswrapper[4834]: I0121 16:52:34.940467 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:35 crc kubenswrapper[4834]: I0121 16:52:35.120780 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:36 crc kubenswrapper[4834]: I0121 16:52:36.638552 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qf5f9" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="registry-server" containerID="cri-o://26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595" gracePeriod=2 Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.276788 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.396330 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities\") pod \"e5831b73-e052-414c-a239-d0a27d07bafb\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.396461 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content\") pod \"e5831b73-e052-414c-a239-d0a27d07bafb\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.396513 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4mcz\" (UniqueName: \"kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz\") pod \"e5831b73-e052-414c-a239-d0a27d07bafb\" (UID: \"e5831b73-e052-414c-a239-d0a27d07bafb\") " Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.401724 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities" (OuterVolumeSpecName: "utilities") pod "e5831b73-e052-414c-a239-d0a27d07bafb" (UID: "e5831b73-e052-414c-a239-d0a27d07bafb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.404310 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz" (OuterVolumeSpecName: "kube-api-access-s4mcz") pod "e5831b73-e052-414c-a239-d0a27d07bafb" (UID: "e5831b73-e052-414c-a239-d0a27d07bafb"). InnerVolumeSpecName "kube-api-access-s4mcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.498777 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4mcz\" (UniqueName: \"kubernetes.io/projected/e5831b73-e052-414c-a239-d0a27d07bafb-kube-api-access-s4mcz\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.498810 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.533685 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5831b73-e052-414c-a239-d0a27d07bafb" (UID: "e5831b73-e052-414c-a239-d0a27d07bafb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.600386 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5831b73-e052-414c-a239-d0a27d07bafb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.649654 4834 generic.go:334] "Generic (PLEG): container finished" podID="e5831b73-e052-414c-a239-d0a27d07bafb" containerID="26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595" exitCode=0 Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.649708 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qf5f9" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.649711 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerDied","Data":"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595"} Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.649772 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qf5f9" event={"ID":"e5831b73-e052-414c-a239-d0a27d07bafb","Type":"ContainerDied","Data":"16e686b8da40fa2546d32015bfd5a92ef19af170f3fb7c31484cc9dd7d0b97d3"} Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.649794 4834 scope.go:117] "RemoveContainer" containerID="26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.694213 4834 scope.go:117] "RemoveContainer" containerID="9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.699469 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.711937 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qf5f9"] Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.717686 4834 scope.go:117] "RemoveContainer" containerID="76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.766095 4834 scope.go:117] "RemoveContainer" containerID="26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595" Jan 21 16:52:37 crc kubenswrapper[4834]: E0121 16:52:37.766531 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595\": container with ID starting with 26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595 not found: ID does not exist" containerID="26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.766570 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595"} err="failed to get container status \"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595\": rpc error: code = NotFound desc = could not find container \"26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595\": container with ID starting with 26bf27d738b5bdb364de377eedea5c8da4c5d74c69b5b1708bcd00e42a5cc595 not found: ID does not exist" Jan 21 16:52:37 crc 
kubenswrapper[4834]: I0121 16:52:37.766597 4834 scope.go:117] "RemoveContainer" containerID="9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167" Jan 21 16:52:37 crc kubenswrapper[4834]: E0121 16:52:37.768558 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167\": container with ID starting with 9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167 not found: ID does not exist" containerID="9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.768603 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167"} err="failed to get container status \"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167\": rpc error: code = NotFound desc = could not find container \"9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167\": container with ID starting with 9522883dca98dc04eaaa9023a9bcbafb67953d6d5867fee8cf786f5e556db167 not found: ID does not exist" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.768634 4834 scope.go:117] "RemoveContainer" containerID="76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e" Jan 21 16:52:37 crc kubenswrapper[4834]: E0121 16:52:37.768969 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e\": container with ID starting with 76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e not found: ID does not exist" containerID="76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e" Jan 21 16:52:37 crc kubenswrapper[4834]: I0121 16:52:37.769003 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e"} err="failed to get container status \"76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e\": rpc error: code = NotFound desc = could not find container \"76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e\": container with ID starting with 76b4a9a36c3b451be79713ec59174f4a22b805442dd3b5b7f9e6acac1735dd8e not found: ID does not exist" Jan 21 16:52:38 crc kubenswrapper[4834]: I0121 16:52:38.338468 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" path="/var/lib/kubelet/pods/e5831b73-e052-414c-a239-d0a27d07bafb/volumes" Jan 21 16:52:40 crc kubenswrapper[4834]: I0121 16:52:40.325577 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:52:40 crc kubenswrapper[4834]: E0121 16:52:40.328350 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:52:55 crc kubenswrapper[4834]: I0121 16:52:55.325491 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" 
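
The three "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are a benign race, not data loss: by the time the kubelet re-issues RemoveContainer for the redhat-operators-qf5f9 containers, the runtime has already deleted them, so it answers with gRPC NotFound and the kubelet records the error and moves on. A minimal sketch of that tolerate-NotFound pattern in Go; the CRIClient interface, its RemoveContainer signature, and the fake client are illustrative assumptions, not kubelet or CRI types (only the grpc status/codes packages are real):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // CRIClient is a stand-in for a container runtime client (assumed shape).
    type CRIClient interface {
        RemoveContainer(ctx context.Context, id string) error
    }

    // removeIgnoringNotFound deletes a container but treats gRPC NotFound as
    // success: another cleanup path may have removed it first, which is the
    // race visible in the log lines above.
    func removeIgnoringNotFound(ctx context.Context, c CRIClient, id string) error {
        err := c.RemoveContainer(ctx, id)
        if status.Code(err) == codes.NotFound {
            log.Printf("container %s already gone; nothing to do", id)
            return nil
        }
        return err
    }

    // fakeClient simulates the runtime's "ID does not exist" answer.
    type fakeClient struct{}

    func (fakeClient) RemoveContainer(ctx context.Context, id string) error {
        return status.Error(codes.NotFound, "container with that ID not found: ID does not exist")
    }

    func main() {
        err := removeIgnoringNotFound(context.Background(), fakeClient{}, "example-container-id")
        log.Println("result:", err) // result: <nil>
    }
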
Jan 21 16:52:55 crc kubenswrapper[4834]: E0121 16:52:55.326388 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:53:08 crc kubenswrapper[4834]: I0121 16:53:08.325099 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09"
Jan 21 16:53:08 crc kubenswrapper[4834]: E0121 16:53:08.326273 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:53:21 crc kubenswrapper[4834]: I0121 16:53:21.324533 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09"
Jan 21 16:53:21 crc kubenswrapper[4834]: E0121 16:53:21.325474 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.146278 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"]
Jan 21 16:53:27 crc kubenswrapper[4834]: E0121 16:53:27.148459 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="extract-content"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.148917 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="extract-content"
Jan 21 16:53:27 crc kubenswrapper[4834]: E0121 16:53:27.149098 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="extract-utilities"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.149118 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="extract-utilities"
Jan 21 16:53:27 crc kubenswrapper[4834]: E0121 16:53:27.149168 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="registry-server"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.149184 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="registry-server"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.150611 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5831b73-e052-414c-a239-d0a27d07bafb" containerName="registry-server"
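
The machine-config-daemon entries repeating through this window (16:52:40 through 16:53:21) show the kubelet's crash-loop backoff at its ceiling: the restart delay starts small, doubles after each failed restart, and is capped, so every sync attempt inside the window is rejected with "back-off 5m0s" until the delay expires. A rough sketch of that schedule; the 10s initial delay is an assumed default, and only the 5m cap is confirmed by the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed: 10s initial delay, doubling per failed restart, 5m cap.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failed restart %d -> next attempt delayed %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // pins at 5m0s, the value in the messages above
            }
        }
    }

Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.167241 4834 util.go:30] "No sandbox for pod can be found. 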
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.186218 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"] Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.219592 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.219663 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvq9\" (UniqueName: \"kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.219692 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.321839 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.321970 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pvq9\" (UniqueName: \"kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.322001 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.322555 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.322683 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.353126 4834 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2pvq9\" (UniqueName: \"kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9\") pod \"redhat-marketplace-jwnvp\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") " pod="openshift-marketplace/redhat-marketplace-jwnvp"
Jan 21 16:53:27 crc kubenswrapper[4834]: I0121 16:53:27.509388 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwnvp"
Jan 21 16:53:28 crc kubenswrapper[4834]: I0121 16:53:28.046328 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"]
Jan 21 16:53:28 crc kubenswrapper[4834]: I0121 16:53:28.177837 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerStarted","Data":"60fe5e8aad5c6791b2a1aa33377bed9c1ae8fefe5f9bd8d6c909e84b97255bdd"}
Jan 21 16:53:29 crc kubenswrapper[4834]: I0121 16:53:29.190199 4834 generic.go:334] "Generic (PLEG): container finished" podID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerID="6ea81ca1fc3b350db7b68f0013cca13c4b7bdaea246138df167e02faa8dd14bb" exitCode=0
Jan 21 16:53:29 crc kubenswrapper[4834]: I0121 16:53:29.190525 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerDied","Data":"6ea81ca1fc3b350db7b68f0013cca13c4b7bdaea246138df167e02faa8dd14bb"}
Jan 21 16:53:30 crc kubenswrapper[4834]: I0121 16:53:30.217197 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerStarted","Data":"1c37f754da2c53fcab31fc701d5ef239622682fc180b9d0edcaeb7c3f3a9cae0"}
Jan 21 16:53:31 crc kubenswrapper[4834]: I0121 16:53:31.232607 4834 generic.go:334] "Generic (PLEG): container finished" podID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerID="1c37f754da2c53fcab31fc701d5ef239622682fc180b9d0edcaeb7c3f3a9cae0" exitCode=0
Jan 21 16:53:31 crc kubenswrapper[4834]: I0121 16:53:31.232742 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerDied","Data":"1c37f754da2c53fcab31fc701d5ef239622682fc180b9d0edcaeb7c3f3a9cae0"}
Jan 21 16:53:32 crc kubenswrapper[4834]: I0121 16:53:32.246969 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerStarted","Data":"64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e"}
Jan 21 16:53:32 crc kubenswrapper[4834]: I0121 16:53:32.265212 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jwnvp" podStartSLOduration=2.463471582 podStartE2EDuration="5.265191353s" podCreationTimestamp="2026-01-21 16:53:27 +0000 UTC" firstStartedPulling="2026-01-21 16:53:29.19228841 +0000 UTC m=+8555.166637455" lastFinishedPulling="2026-01-21 16:53:31.994008181 +0000 UTC m=+8557.968357226" observedRunningTime="2026-01-21 16:53:32.263036316 +0000 UTC m=+8558.237385371" watchObservedRunningTime="2026-01-21 16:53:32.265191353 +0000 UTC m=+8558.239540398"
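
The "Observed pod startup duration" record above decomposes exactly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (32.265191353s - 27s = 5.265191353s within 16:53), and podStartSLOduration subtracts the image-pull window (31.994008181s - 29.19228841s = 2.801719771s), leaving 2.463471582s. A quick check of that arithmetic, with the offsets copied from the record:

    package main

    import "fmt"

    func main() {
        // Offsets in seconds after 16:53:00, copied from the log record above.
        created := 27.0          // podCreationTimestamp
        running := 32.265191353  // watchObservedRunningTime
        pullStart := 29.19228841 // firstStartedPulling
        pullEnd := 31.994008181  // lastFinishedPulling

        e2e := running - created           // podStartE2EDuration: 5.265191353s
        slo := e2e - (pullEnd - pullStart) // podStartSLOduration: 2.463471582s
        fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo)
    }

Jan 21 16:53:34 crc kubenswrapper[4834]: I0121 16:53:34.328856 4834 scope.go:117] "RemoveContainer" 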
containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:53:34 crc kubenswrapper[4834]: E0121 16:53:34.329442 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:53:37 crc kubenswrapper[4834]: I0121 16:53:37.511734 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:37 crc kubenswrapper[4834]: I0121 16:53:37.512556 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:37 crc kubenswrapper[4834]: I0121 16:53:37.560001 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:38 crc kubenswrapper[4834]: I0121 16:53:38.372283 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:38 crc kubenswrapper[4834]: I0121 16:53:38.695766 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"] Jan 21 16:53:40 crc kubenswrapper[4834]: I0121 16:53:40.328671 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jwnvp" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="registry-server" containerID="cri-o://64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e" gracePeriod=2 Jan 21 16:53:40 crc kubenswrapper[4834]: E0121 16:53:40.648357 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64820f82_1886_4725_b0e3_3b62a6b6fc3b.slice/crio-conmon-64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.340605 4834 generic.go:334] "Generic (PLEG): container finished" podID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerID="64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e" exitCode=0 Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.340797 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerDied","Data":"64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e"} Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.341150 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jwnvp" event={"ID":"64820f82-1886-4725-b0e3-3b62a6b6fc3b","Type":"ContainerDied","Data":"60fe5e8aad5c6791b2a1aa33377bed9c1ae8fefe5f9bd8d6c909e84b97255bdd"} Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.341171 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fe5e8aad5c6791b2a1aa33377bed9c1ae8fefe5f9bd8d6c909e84b97255bdd" Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.350439 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwnvp"
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.467412 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pvq9\" (UniqueName: \"kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9\") pod \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") "
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.467673 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities\") pod \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") "
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.467719 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content\") pod \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\" (UID: \"64820f82-1886-4725-b0e3-3b62a6b6fc3b\") "
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.474028 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities" (OuterVolumeSpecName: "utilities") pod "64820f82-1886-4725-b0e3-3b62a6b6fc3b" (UID: "64820f82-1886-4725-b0e3-3b62a6b6fc3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.474204 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9" (OuterVolumeSpecName: "kube-api-access-2pvq9") pod "64820f82-1886-4725-b0e3-3b62a6b6fc3b" (UID: "64820f82-1886-4725-b0e3-3b62a6b6fc3b"). InnerVolumeSpecName "kube-api-access-2pvq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.496578 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64820f82-1886-4725-b0e3-3b62a6b6fc3b" (UID: "64820f82-1886-4725-b0e3-3b62a6b6fc3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.571116 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.571153 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pvq9\" (UniqueName: \"kubernetes.io/projected/64820f82-1886-4725-b0e3-3b62a6b6fc3b-kube-api-access-2pvq9\") on node \"crc\" DevicePath \"\""
Jan 21 16:53:41 crc kubenswrapper[4834]: I0121 16:53:41.571168 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64820f82-1886-4725-b0e3-3b62a6b6fc3b-utilities\") on node \"crc\" DevicePath \"\""
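
Every kubenswrapper line in this log shares the klog header visible above: a severity letter (I/E/W/F), MMDD date, wall-clock time with microseconds, a thread id, file:line, then the message. A small Go parser for that header, useful when slicing episodes like this teardown out of the journal; the regex is a best-effort match for the format seen here, not an official grammar:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches e.g.:
    //   I0121 16:53:41.474028 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0121 16:53:41.474028 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded`
        if m := klogHeader.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s:%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6], m[7])
        }
    }

Jan 21 16:53:42 crc kubenswrapper[4834]: I0121 16:53:42.349821 4834 util.go:48] "No ready sandbox for pod can be found. 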
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jwnvp" Jan 21 16:53:42 crc kubenswrapper[4834]: I0121 16:53:42.381148 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"] Jan 21 16:53:42 crc kubenswrapper[4834]: I0121 16:53:42.391470 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jwnvp"] Jan 21 16:53:44 crc kubenswrapper[4834]: I0121 16:53:44.341601 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" path="/var/lib/kubelet/pods/64820f82-1886-4725-b0e3-3b62a6b6fc3b/volumes" Jan 21 16:53:49 crc kubenswrapper[4834]: I0121 16:53:49.326489 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:53:49 crc kubenswrapper[4834]: E0121 16:53:49.327288 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:54:01 crc kubenswrapper[4834]: I0121 16:54:01.324638 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:54:01 crc kubenswrapper[4834]: E0121 16:54:01.325552 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:54:15 crc kubenswrapper[4834]: I0121 16:54:15.325422 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:54:15 crc kubenswrapper[4834]: E0121 16:54:15.326548 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:54:26 crc kubenswrapper[4834]: I0121 16:54:26.325971 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:54:26 crc kubenswrapper[4834]: E0121 16:54:26.326992 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:54:41 crc kubenswrapper[4834]: I0121 16:54:41.324784 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:54:41 
crc kubenswrapper[4834]: E0121 16:54:41.325517 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:54:56 crc kubenswrapper[4834]: I0121 16:54:56.328610 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:54:56 crc kubenswrapper[4834]: E0121 16:54:56.329656 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:55:11 crc kubenswrapper[4834]: I0121 16:55:11.324837 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:55:11 crc kubenswrapper[4834]: E0121 16:55:11.325716 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:55:24 crc kubenswrapper[4834]: I0121 16:55:24.331812 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:55:24 crc kubenswrapper[4834]: E0121 16:55:24.332588 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:55:38 crc kubenswrapper[4834]: I0121 16:55:38.325537 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:55:38 crc kubenswrapper[4834]: E0121 16:55:38.326398 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.947216 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:55:45 crc kubenswrapper[4834]: E0121 16:55:45.948431 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="extract-content" Jan 21 
16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.948451 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="extract-content" Jan 21 16:55:45 crc kubenswrapper[4834]: E0121 16:55:45.948479 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="extract-utilities" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.948487 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="extract-utilities" Jan 21 16:55:45 crc kubenswrapper[4834]: E0121 16:55:45.948504 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="registry-server" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.948512 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="registry-server" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.948774 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="64820f82-1886-4725-b0e3-3b62a6b6fc3b" containerName="registry-server" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.951080 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:45 crc kubenswrapper[4834]: I0121 16:55:45.996845 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.117753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.117867 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8n9\" (UniqueName: \"kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.122882 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.225424 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.225862 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8n9\" (UniqueName: \"kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " 
pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.226056 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.227332 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.227842 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.252920 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8n9\" (UniqueName: \"kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9\") pod \"certified-operators-sq59f\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.299500 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:46 crc kubenswrapper[4834]: I0121 16:55:46.916771 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:55:47 crc kubenswrapper[4834]: I0121 16:55:47.690400 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerStarted","Data":"cfc95477215d8a80ebb88211cc0e9be70cbce83abe4d1d927c4081e42f9d435e"} Jan 21 16:55:48 crc kubenswrapper[4834]: I0121 16:55:48.701583 4834 generic.go:334] "Generic (PLEG): container finished" podID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerID="e61fa000f4c62b5e48289f4292ad7d086fa759807c51930978217bc93a37c8c9" exitCode=0 Jan 21 16:55:48 crc kubenswrapper[4834]: I0121 16:55:48.702468 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerDied","Data":"e61fa000f4c62b5e48289f4292ad7d086fa759807c51930978217bc93a37c8c9"} Jan 21 16:55:49 crc kubenswrapper[4834]: I0121 16:55:49.717885 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerStarted","Data":"24aa3a75a8bd2156e3fa07bac45e387d37cc9f76dafd0e764db1a1c237f911dc"} Jan 21 16:55:51 crc kubenswrapper[4834]: I0121 16:55:51.741532 4834 generic.go:334] "Generic (PLEG): container finished" podID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerID="24aa3a75a8bd2156e3fa07bac45e387d37cc9f76dafd0e764db1a1c237f911dc" exitCode=0 Jan 21 16:55:51 crc kubenswrapper[4834]: I0121 16:55:51.741608 4834 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerDied","Data":"24aa3a75a8bd2156e3fa07bac45e387d37cc9f76dafd0e764db1a1c237f911dc"} Jan 21 16:55:52 crc kubenswrapper[4834]: I0121 16:55:52.764349 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerStarted","Data":"9cce35d329314392eccc7f7c9691323570ca05ba77d78425e41bbd1eb78ea286"} Jan 21 16:55:52 crc kubenswrapper[4834]: I0121 16:55:52.791407 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sq59f" podStartSLOduration=4.302228336 podStartE2EDuration="7.791373834s" podCreationTimestamp="2026-01-21 16:55:45 +0000 UTC" firstStartedPulling="2026-01-21 16:55:48.705301637 +0000 UTC m=+8694.679650682" lastFinishedPulling="2026-01-21 16:55:52.194447135 +0000 UTC m=+8698.168796180" observedRunningTime="2026-01-21 16:55:52.784785059 +0000 UTC m=+8698.759134144" watchObservedRunningTime="2026-01-21 16:55:52.791373834 +0000 UTC m=+8698.765722869" Jan 21 16:55:53 crc kubenswrapper[4834]: I0121 16:55:53.326845 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:55:53 crc kubenswrapper[4834]: E0121 16:55:53.329169 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:55:56 crc kubenswrapper[4834]: I0121 16:55:56.300466 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:56 crc kubenswrapper[4834]: I0121 16:55:56.301494 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:55:56 crc kubenswrapper[4834]: I0121 16:55:56.741292 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:56:04 crc kubenswrapper[4834]: I0121 16:56:04.325248 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:56:04 crc kubenswrapper[4834]: E0121 16:56:04.326015 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:56:06 crc kubenswrapper[4834]: I0121 16:56:06.351228 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:56:06 crc kubenswrapper[4834]: I0121 16:56:06.401193 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:56:06 crc kubenswrapper[4834]: I0121 16:56:06.921614 4834 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sq59f" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="registry-server" containerID="cri-o://9cce35d329314392eccc7f7c9691323570ca05ba77d78425e41bbd1eb78ea286" gracePeriod=2 Jan 21 16:56:07 crc kubenswrapper[4834]: I0121 16:56:07.968507 4834 generic.go:334] "Generic (PLEG): container finished" podID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerID="9cce35d329314392eccc7f7c9691323570ca05ba77d78425e41bbd1eb78ea286" exitCode=0 Jan 21 16:56:07 crc kubenswrapper[4834]: I0121 16:56:07.968620 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerDied","Data":"9cce35d329314392eccc7f7c9691323570ca05ba77d78425e41bbd1eb78ea286"} Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.336774 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.468858 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8n9\" (UniqueName: \"kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9\") pod \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.469043 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content\") pod \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.469216 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities\") pod \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\" (UID: \"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8\") " Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.470644 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities" (OuterVolumeSpecName: "utilities") pod "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" (UID: "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.476489 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9" (OuterVolumeSpecName: "kube-api-access-fb8n9") pod "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" (UID: "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8"). InnerVolumeSpecName "kube-api-access-fb8n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.517660 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" (UID: "06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.572302 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.572339 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.572351 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8n9\" (UniqueName: \"kubernetes.io/projected/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8-kube-api-access-fb8n9\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.992290 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq59f" event={"ID":"06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8","Type":"ContainerDied","Data":"cfc95477215d8a80ebb88211cc0e9be70cbce83abe4d1d927c4081e42f9d435e"} Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.992351 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq59f" Jan 21 16:56:08 crc kubenswrapper[4834]: I0121 16:56:08.993509 4834 scope.go:117] "RemoveContainer" containerID="9cce35d329314392eccc7f7c9691323570ca05ba77d78425e41bbd1eb78ea286" Jan 21 16:56:09 crc kubenswrapper[4834]: I0121 16:56:09.033597 4834 scope.go:117] "RemoveContainer" containerID="24aa3a75a8bd2156e3fa07bac45e387d37cc9f76dafd0e764db1a1c237f911dc" Jan 21 16:56:09 crc kubenswrapper[4834]: I0121 16:56:09.043770 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:56:09 crc kubenswrapper[4834]: I0121 16:56:09.055776 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sq59f"] Jan 21 16:56:09 crc kubenswrapper[4834]: I0121 16:56:09.063240 4834 scope.go:117] "RemoveContainer" containerID="e61fa000f4c62b5e48289f4292ad7d086fa759807c51930978217bc93a37c8c9" Jan 21 16:56:10 crc kubenswrapper[4834]: I0121 16:56:10.340739 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" path="/var/lib/kubelet/pods/06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8/volumes" Jan 21 16:56:18 crc kubenswrapper[4834]: I0121 16:56:18.325965 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:56:18 crc kubenswrapper[4834]: E0121 16:56:18.327010 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:56:33 crc kubenswrapper[4834]: I0121 16:56:33.325110 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:56:33 crc kubenswrapper[4834]: E0121 16:56:33.325998 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 16:56:48 crc kubenswrapper[4834]: I0121 16:56:48.326431 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 16:56:48 crc kubenswrapper[4834]: I0121 16:56:48.675038 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9"} Jan 21 16:59:17 crc kubenswrapper[4834]: I0121 16:59:17.114232 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:17 crc kubenswrapper[4834]: I0121 16:59:17.114739 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:47 crc kubenswrapper[4834]: I0121 16:59:47.114087 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:47 crc kubenswrapper[4834]: I0121 16:59:47.115730 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.169602 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4"] Jan 21 17:00:00 crc kubenswrapper[4834]: E0121 17:00:00.170798 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.170821 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="extract-utilities" Jan 21 17:00:00 crc kubenswrapper[4834]: E0121 17:00:00.170859 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.170867 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="extract-content" Jan 21 17:00:00 crc kubenswrapper[4834]: E0121 17:00:00.170910 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="registry-server" Jan 21 17:00:00 crc 
kubenswrapper[4834]: I0121 17:00:00.170918 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.171149 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b0d52e-ef0b-4e46-a2ab-ad268e6e8ce8" containerName="registry-server" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.171967 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.174985 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.175222 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.180098 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4"] Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.329966 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.330157 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.330253 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sdq\" (UniqueName: \"kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.432485 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.432700 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.434617 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.446417 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6sdq\" (UniqueName: \"kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.446643 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.473752 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6sdq\" (UniqueName: \"kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq\") pod \"collect-profiles-29483580-6l5v4\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:00 crc kubenswrapper[4834]: I0121 17:00:00.504942 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:01 crc kubenswrapper[4834]: I0121 17:00:01.057062 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4"] Jan 21 17:00:01 crc kubenswrapper[4834]: I0121 17:00:01.076348 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" event={"ID":"ca16723a-4305-4ec4-b578-543871eb197f","Type":"ContainerStarted","Data":"d9eea1c0d1777ca3cc0112a6b1e5aa5eff03dd8cd6888b96aee3aafe49cee56b"} Jan 21 17:00:02 crc kubenswrapper[4834]: I0121 17:00:02.088347 4834 generic.go:334] "Generic (PLEG): container finished" podID="ca16723a-4305-4ec4-b578-543871eb197f" containerID="f50ff1807b07f6626ec51ef34961480f3339d328cac03b846abbdb971b9cd249" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4834]: I0121 17:00:02.088560 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" event={"ID":"ca16723a-4305-4ec4-b578-543871eb197f","Type":"ContainerDied","Data":"f50ff1807b07f6626ec51ef34961480f3339d328cac03b846abbdb971b9cd249"} Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.484839 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.632470 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6sdq\" (UniqueName: \"kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq\") pod \"ca16723a-4305-4ec4-b578-543871eb197f\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.632618 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume\") pod \"ca16723a-4305-4ec4-b578-543871eb197f\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.632901 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume\") pod \"ca16723a-4305-4ec4-b578-543871eb197f\" (UID: \"ca16723a-4305-4ec4-b578-543871eb197f\") " Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.634377 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca16723a-4305-4ec4-b578-543871eb197f" (UID: "ca16723a-4305-4ec4-b578-543871eb197f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.641233 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq" (OuterVolumeSpecName: "kube-api-access-l6sdq") pod "ca16723a-4305-4ec4-b578-543871eb197f" (UID: "ca16723a-4305-4ec4-b578-543871eb197f"). InnerVolumeSpecName "kube-api-access-l6sdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.642723 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca16723a-4305-4ec4-b578-543871eb197f" (UID: "ca16723a-4305-4ec4-b578-543871eb197f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.735449 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca16723a-4305-4ec4-b578-543871eb197f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.735489 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6sdq\" (UniqueName: \"kubernetes.io/projected/ca16723a-4305-4ec4-b578-543871eb197f-kube-api-access-l6sdq\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4834]: I0121 17:00:03.735499 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca16723a-4305-4ec4-b578-543871eb197f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4834]: I0121 17:00:04.112674 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" event={"ID":"ca16723a-4305-4ec4-b578-543871eb197f","Type":"ContainerDied","Data":"d9eea1c0d1777ca3cc0112a6b1e5aa5eff03dd8cd6888b96aee3aafe49cee56b"} Jan 21 17:00:04 crc kubenswrapper[4834]: I0121 17:00:04.113082 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9eea1c0d1777ca3cc0112a6b1e5aa5eff03dd8cd6888b96aee3aafe49cee56b" Jan 21 17:00:04 crc kubenswrapper[4834]: I0121 17:00:04.112732 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-6l5v4" Jan 21 17:00:04 crc kubenswrapper[4834]: I0121 17:00:04.569555 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj"] Jan 21 17:00:04 crc kubenswrapper[4834]: I0121 17:00:04.581045 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-2v2sj"] Jan 21 17:00:06 crc kubenswrapper[4834]: I0121 17:00:06.338892 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594e11d5-6b18-492e-9096-a898326ce42f" path="/var/lib/kubelet/pods/594e11d5-6b18-492e-9096-a898326ce42f/volumes" Jan 21 17:00:16 crc kubenswrapper[4834]: I0121 17:00:16.434112 4834 scope.go:117] "RemoveContainer" containerID="1c37f754da2c53fcab31fc701d5ef239622682fc180b9d0edcaeb7c3f3a9cae0" Jan 21 17:00:16 crc kubenswrapper[4834]: I0121 17:00:16.677622 4834 scope.go:117] "RemoveContainer" containerID="6ea81ca1fc3b350db7b68f0013cca13c4b7bdaea246138df167e02faa8dd14bb" Jan 21 17:00:16 crc kubenswrapper[4834]: I0121 17:00:16.699359 4834 scope.go:117] "RemoveContainer" containerID="64f3fa47647f8ab03577cde8bcc861b7667da700b0f80c96c31437999bac5d0e" Jan 21 17:00:16 crc kubenswrapper[4834]: I0121 17:00:16.747948 4834 scope.go:117] "RemoveContainer" containerID="96b77a1c531feee6a7c85c727ba18d11fff5889fdce6c7f84056303d3bcc128f" Jan 21 17:00:17 crc kubenswrapper[4834]: I0121 17:00:17.113973 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:00:17 crc kubenswrapper[4834]: I0121 17:00:17.114245 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:00:17 crc kubenswrapper[4834]: I0121 17:00:17.114291 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 17:00:17 crc kubenswrapper[4834]: I0121 17:00:17.115240 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:00:17 crc kubenswrapper[4834]: I0121 17:00:17.115311 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9" gracePeriod=600 Jan 21 17:00:18 crc kubenswrapper[4834]: I0121 17:00:18.254888 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9" exitCode=0 Jan 21 17:00:18 crc kubenswrapper[4834]: I0121 17:00:18.254981 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9"} Jan 21 17:00:18 crc kubenswrapper[4834]: I0121 17:00:18.255306 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171"} Jan 21 17:00:18 crc kubenswrapper[4834]: I0121 17:00:18.256031 4834 scope.go:117] "RemoveContainer" containerID="8a772ccea3b53019f1b6e128f3a8fedbcd8fc44ad0cd75e5a7a862acd2152e09" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.160438 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483581-8jpm5"] Jan 21 17:01:00 crc kubenswrapper[4834]: E0121 17:01:00.162025 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca16723a-4305-4ec4-b578-543871eb197f" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.162048 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca16723a-4305-4ec4-b578-543871eb197f" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.162297 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca16723a-4305-4ec4-b578-543871eb197f" containerName="collect-profiles" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.163321 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.176349 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-8jpm5"] Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.256793 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.257348 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.257613 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprzl\" (UniqueName: \"kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.257753 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.359785 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprzl\" (UniqueName: \"kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.359856 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.359920 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.360110 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.366541 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.375959 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.377257 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.378691 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprzl\" (UniqueName: \"kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl\") pod \"keystone-cron-29483581-8jpm5\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:00 crc kubenswrapper[4834]: I0121 17:01:00.499527 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:01 crc kubenswrapper[4834]: I0121 17:01:01.093571 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-8jpm5"] Jan 21 17:01:01 crc kubenswrapper[4834]: I0121 17:01:01.760869 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-8jpm5" event={"ID":"4606879e-c3cc-4038-8b13-6faf73b64054","Type":"ContainerStarted","Data":"17c5f5890ff19b0998b5e55597247e256ec50d1205bf60b46e957d34081a58cc"} Jan 21 17:01:01 crc kubenswrapper[4834]: I0121 17:01:01.761412 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-8jpm5" event={"ID":"4606879e-c3cc-4038-8b13-6faf73b64054","Type":"ContainerStarted","Data":"4af84bb818a1c54612ea448d32fc7ef68a7880e779cd937e78b84df726970f05"} Jan 21 17:01:01 crc kubenswrapper[4834]: I0121 17:01:01.798203 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483581-8jpm5" podStartSLOduration=1.7981750459999999 podStartE2EDuration="1.798175046s" podCreationTimestamp="2026-01-21 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:01:01.790867107 +0000 UTC m=+9007.765216162" watchObservedRunningTime="2026-01-21 17:01:01.798175046 +0000 UTC m=+9007.772524101" Jan 21 17:01:04 crc kubenswrapper[4834]: I0121 17:01:04.862120 4834 generic.go:334] "Generic (PLEG): container finished" podID="4606879e-c3cc-4038-8b13-6faf73b64054" containerID="17c5f5890ff19b0998b5e55597247e256ec50d1205bf60b46e957d34081a58cc" exitCode=0 Jan 21 17:01:04 crc kubenswrapper[4834]: I0121 17:01:04.862217 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-8jpm5" event={"ID":"4606879e-c3cc-4038-8b13-6faf73b64054","Type":"ContainerDied","Data":"17c5f5890ff19b0998b5e55597247e256ec50d1205bf60b46e957d34081a58cc"} Jan 21 17:01:06 crc 
kubenswrapper[4834]: I0121 17:01:06.209664 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.361634 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys\") pod \"4606879e-c3cc-4038-8b13-6faf73b64054\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.361698 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle\") pod \"4606879e-c3cc-4038-8b13-6faf73b64054\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.361780 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data\") pod \"4606879e-c3cc-4038-8b13-6faf73b64054\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.361920 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pprzl\" (UniqueName: \"kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl\") pod \"4606879e-c3cc-4038-8b13-6faf73b64054\" (UID: \"4606879e-c3cc-4038-8b13-6faf73b64054\") " Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.884356 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-8jpm5" event={"ID":"4606879e-c3cc-4038-8b13-6faf73b64054","Type":"ContainerDied","Data":"4af84bb818a1c54612ea448d32fc7ef68a7880e779cd937e78b84df726970f05"} Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.884630 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4af84bb818a1c54612ea448d32fc7ef68a7880e779cd937e78b84df726970f05" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.884425 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-8jpm5" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.896728 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4606879e-c3cc-4038-8b13-6faf73b64054" (UID: "4606879e-c3cc-4038-8b13-6faf73b64054"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.900150 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl" (OuterVolumeSpecName: "kube-api-access-pprzl") pod "4606879e-c3cc-4038-8b13-6faf73b64054" (UID: "4606879e-c3cc-4038-8b13-6faf73b64054"). InnerVolumeSpecName "kube-api-access-pprzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.977653 4834 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.977686 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pprzl\" (UniqueName: \"kubernetes.io/projected/4606879e-c3cc-4038-8b13-6faf73b64054-kube-api-access-pprzl\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4834]: I0121 17:01:06.989871 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4606879e-c3cc-4038-8b13-6faf73b64054" (UID: "4606879e-c3cc-4038-8b13-6faf73b64054"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:07 crc kubenswrapper[4834]: I0121 17:01:07.010259 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data" (OuterVolumeSpecName: "config-data") pod "4606879e-c3cc-4038-8b13-6faf73b64054" (UID: "4606879e-c3cc-4038-8b13-6faf73b64054"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:07 crc kubenswrapper[4834]: I0121 17:01:07.079704 4834 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:07 crc kubenswrapper[4834]: I0121 17:01:07.080143 4834 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4606879e-c3cc-4038-8b13-6faf73b64054-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.237360 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:17 crc kubenswrapper[4834]: E0121 17:01:17.238438 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4606879e-c3cc-4038-8b13-6faf73b64054" containerName="keystone-cron" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.238453 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="4606879e-c3cc-4038-8b13-6faf73b64054" containerName="keystone-cron" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.238786 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="4606879e-c3cc-4038-8b13-6faf73b64054" containerName="keystone-cron" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.240746 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.248033 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.328718 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.328824 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njkpd\" (UniqueName: \"kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.329020 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.431109 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.431514 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njkpd\" (UniqueName: \"kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.431567 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.431991 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.433576 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.452132 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njkpd\" (UniqueName: \"kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd\") pod \"community-operators-l5v24\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:17 crc kubenswrapper[4834]: I0121 17:01:17.558638 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:18 crc kubenswrapper[4834]: I0121 17:01:18.234442 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:19 crc kubenswrapper[4834]: I0121 17:01:19.029983 4834 generic.go:334] "Generic (PLEG): container finished" podID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerID="c0593c282d23bec8b0fde83bc8b72352ec9fd063f61cd79c7a5dd86964020284" exitCode=0 Jan 21 17:01:19 crc kubenswrapper[4834]: I0121 17:01:19.030060 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerDied","Data":"c0593c282d23bec8b0fde83bc8b72352ec9fd063f61cd79c7a5dd86964020284"} Jan 21 17:01:19 crc kubenswrapper[4834]: I0121 17:01:19.030263 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerStarted","Data":"f3c408330a24327aeb6b13c6483ca27a28f85503943e00e258e32adf89843dec"} Jan 21 17:01:19 crc kubenswrapper[4834]: I0121 17:01:19.040121 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:01:20 crc kubenswrapper[4834]: I0121 17:01:20.044798 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerStarted","Data":"b1398e1825ac4feb90c1e4ec98a1a53e22478cada8824024a908ee0dc0c3db54"} Jan 21 17:01:21 crc kubenswrapper[4834]: I0121 17:01:21.055865 4834 generic.go:334] "Generic (PLEG): container finished" podID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerID="b1398e1825ac4feb90c1e4ec98a1a53e22478cada8824024a908ee0dc0c3db54" exitCode=0 Jan 21 17:01:21 crc kubenswrapper[4834]: I0121 17:01:21.055911 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerDied","Data":"b1398e1825ac4feb90c1e4ec98a1a53e22478cada8824024a908ee0dc0c3db54"} Jan 21 17:01:22 crc kubenswrapper[4834]: I0121 17:01:22.070583 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerStarted","Data":"f08667e4b8beb334b3e4d1cb2bdb3c110d3423d239bd28d0ebbc951c8b5c046b"} Jan 21 17:01:22 crc kubenswrapper[4834]: I0121 17:01:22.095184 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l5v24" podStartSLOduration=2.667914421 podStartE2EDuration="5.095170614s" podCreationTimestamp="2026-01-21 17:01:17 +0000 UTC" firstStartedPulling="2026-01-21 17:01:19.039505078 +0000 UTC m=+9025.013854113" lastFinishedPulling="2026-01-21 17:01:21.466761261 +0000 UTC m=+9027.441110306" observedRunningTime="2026-01-21 17:01:22.087936837 +0000 UTC m=+9028.062285892" watchObservedRunningTime="2026-01-21 
17:01:22.095170614 +0000 UTC m=+9028.069519659" Jan 21 17:01:27 crc kubenswrapper[4834]: I0121 17:01:27.560474 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:27 crc kubenswrapper[4834]: I0121 17:01:27.561481 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:27 crc kubenswrapper[4834]: I0121 17:01:27.614649 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:28 crc kubenswrapper[4834]: I0121 17:01:28.544174 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:31 crc kubenswrapper[4834]: I0121 17:01:31.222712 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:31 crc kubenswrapper[4834]: I0121 17:01:31.223412 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l5v24" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="registry-server" containerID="cri-o://f08667e4b8beb334b3e4d1cb2bdb3c110d3423d239bd28d0ebbc951c8b5c046b" gracePeriod=2 Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.172612 4834 generic.go:334] "Generic (PLEG): container finished" podID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerID="f08667e4b8beb334b3e4d1cb2bdb3c110d3423d239bd28d0ebbc951c8b5c046b" exitCode=0 Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.172847 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerDied","Data":"f08667e4b8beb334b3e4d1cb2bdb3c110d3423d239bd28d0ebbc951c8b5c046b"} Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.173368 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l5v24" event={"ID":"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1","Type":"ContainerDied","Data":"f3c408330a24327aeb6b13c6483ca27a28f85503943e00e258e32adf89843dec"} Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.173388 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3c408330a24327aeb6b13c6483ca27a28f85503943e00e258e32adf89843dec" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.246836 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.408571 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njkpd\" (UniqueName: \"kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd\") pod \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.408900 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content\") pod \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.408969 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities\") pod \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\" (UID: \"6b0c9178-59d8-4b60-81d5-5f55bc2f97b1\") " Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.410029 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities" (OuterVolumeSpecName: "utilities") pod "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" (UID: "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.414695 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd" (OuterVolumeSpecName: "kube-api-access-njkpd") pod "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" (UID: "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1"). InnerVolumeSpecName "kube-api-access-njkpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.463374 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" (UID: "6b0c9178-59d8-4b60-81d5-5f55bc2f97b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.512045 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.512086 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:32 crc kubenswrapper[4834]: I0121 17:01:32.512095 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njkpd\" (UniqueName: \"kubernetes.io/projected/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1-kube-api-access-njkpd\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:33 crc kubenswrapper[4834]: I0121 17:01:33.185319 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l5v24" Jan 21 17:01:33 crc kubenswrapper[4834]: I0121 17:01:33.235632 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:33 crc kubenswrapper[4834]: I0121 17:01:33.250369 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l5v24"] Jan 21 17:01:34 crc kubenswrapper[4834]: I0121 17:01:34.338166 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" path="/var/lib/kubelet/pods/6b0c9178-59d8-4b60-81d5-5f55bc2f97b1/volumes" Jan 21 17:02:17 crc kubenswrapper[4834]: I0121 17:02:17.205363 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:17 crc kubenswrapper[4834]: I0121 17:02:17.206119 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:47 crc kubenswrapper[4834]: I0121 17:02:47.118029 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:47 crc kubenswrapper[4834]: I0121 17:02:47.118587 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.416209 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:02:52 crc kubenswrapper[4834]: E0121 17:02:52.418094 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="registry-server" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.418118 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="registry-server" Jan 21 17:02:52 crc kubenswrapper[4834]: E0121 17:02:52.418180 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="extract-utilities" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.418188 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="extract-utilities" Jan 21 17:02:52 crc kubenswrapper[4834]: E0121 17:02:52.418226 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="extract-content" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.418233 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="extract-content" Jan 21 17:02:52 crc 
kubenswrapper[4834]: I0121 17:02:52.418707 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0c9178-59d8-4b60-81d5-5f55bc2f97b1" containerName="registry-server" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.422833 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.422993 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.591430 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.592015 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnbmz\" (UniqueName: \"kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.592097 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.694205 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.694309 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnbmz\" (UniqueName: \"kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.694403 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.695234 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.695412 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.720945 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnbmz\" (UniqueName: \"kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz\") pod \"redhat-operators-v4jbr\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:52 crc kubenswrapper[4834]: I0121 17:02:52.751404 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:02:53 crc kubenswrapper[4834]: I0121 17:02:53.241441 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:02:53 crc kubenswrapper[4834]: I0121 17:02:53.603763 4834 generic.go:334] "Generic (PLEG): container finished" podID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerID="5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779" exitCode=0 Jan 21 17:02:53 crc kubenswrapper[4834]: I0121 17:02:53.603812 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerDied","Data":"5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779"} Jan 21 17:02:53 crc kubenswrapper[4834]: I0121 17:02:53.603845 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerStarted","Data":"5f59eb611bb3bef69911a2c8c28e1cfe4346cae74a518a7ba51d344a70fe5c72"} Jan 21 17:02:55 crc kubenswrapper[4834]: I0121 17:02:55.622195 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerStarted","Data":"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34"} Jan 21 17:02:56 crc kubenswrapper[4834]: I0121 17:02:56.635207 4834 generic.go:334] "Generic (PLEG): container finished" podID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerID="b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34" exitCode=0 Jan 21 17:02:56 crc kubenswrapper[4834]: I0121 17:02:56.635308 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerDied","Data":"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34"} Jan 21 17:03:00 crc kubenswrapper[4834]: I0121 17:03:00.679271 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerStarted","Data":"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c"} Jan 21 17:03:00 crc kubenswrapper[4834]: I0121 17:03:00.704154 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4jbr" podStartSLOduration=2.8154607670000003 podStartE2EDuration="8.70413145s" podCreationTimestamp="2026-01-21 17:02:52 +0000 UTC" firstStartedPulling="2026-01-21 17:02:53.605411752 +0000 UTC m=+9119.579760797" lastFinishedPulling="2026-01-21 17:02:59.494082435 +0000 UTC m=+9125.468431480" 
observedRunningTime="2026-01-21 17:03:00.70319418 +0000 UTC m=+9126.677543245" watchObservedRunningTime="2026-01-21 17:03:00.70413145 +0000 UTC m=+9126.678480495" Jan 21 17:03:02 crc kubenswrapper[4834]: I0121 17:03:02.752397 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:02 crc kubenswrapper[4834]: I0121 17:03:02.752677 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:03 crc kubenswrapper[4834]: I0121 17:03:03.815579 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v4jbr" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="registry-server" probeResult="failure" output=< Jan 21 17:03:03 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 17:03:03 crc kubenswrapper[4834]: > Jan 21 17:03:13 crc kubenswrapper[4834]: I0121 17:03:13.047773 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:13 crc kubenswrapper[4834]: I0121 17:03:13.107675 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:13 crc kubenswrapper[4834]: I0121 17:03:13.289470 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:03:14 crc kubenswrapper[4834]: I0121 17:03:14.823067 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v4jbr" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="registry-server" containerID="cri-o://7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c" gracePeriod=2 Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.552445 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.721874 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities\") pod \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.722059 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content\") pod \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.722199 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnbmz\" (UniqueName: \"kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz\") pod \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\" (UID: \"bfb3da22-f773-4bb8-a1c2-8a2608b39f20\") " Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.724912 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities" (OuterVolumeSpecName: "utilities") pod "bfb3da22-f773-4bb8-a1c2-8a2608b39f20" (UID: "bfb3da22-f773-4bb8-a1c2-8a2608b39f20"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.730367 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz" (OuterVolumeSpecName: "kube-api-access-wnbmz") pod "bfb3da22-f773-4bb8-a1c2-8a2608b39f20" (UID: "bfb3da22-f773-4bb8-a1c2-8a2608b39f20"). InnerVolumeSpecName "kube-api-access-wnbmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.825358 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnbmz\" (UniqueName: \"kubernetes.io/projected/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-kube-api-access-wnbmz\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.825391 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.838332 4834 generic.go:334] "Generic (PLEG): container finished" podID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerID="7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c" exitCode=0 Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.838387 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerDied","Data":"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c"} Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.838455 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4jbr" event={"ID":"bfb3da22-f773-4bb8-a1c2-8a2608b39f20","Type":"ContainerDied","Data":"5f59eb611bb3bef69911a2c8c28e1cfe4346cae74a518a7ba51d344a70fe5c72"} Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.838451 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4jbr" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.838469 4834 scope.go:117] "RemoveContainer" containerID="7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.854758 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb3da22-f773-4bb8-a1c2-8a2608b39f20" (UID: "bfb3da22-f773-4bb8-a1c2-8a2608b39f20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.861682 4834 scope.go:117] "RemoveContainer" containerID="b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.886006 4834 scope.go:117] "RemoveContainer" containerID="5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.927574 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3da22-f773-4bb8-a1c2-8a2608b39f20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.945718 4834 scope.go:117] "RemoveContainer" containerID="7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c" Jan 21 17:03:15 crc kubenswrapper[4834]: E0121 17:03:15.946235 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c\": container with ID starting with 7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c not found: ID does not exist" containerID="7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.946290 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c"} err="failed to get container status \"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c\": rpc error: code = NotFound desc = could not find container \"7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c\": container with ID starting with 7fcd60f12fe03b21b4f6e01f4ca57f5acda5efac80a3afa91fda6870b87fda9c not found: ID does not exist" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.946325 4834 scope.go:117] "RemoveContainer" containerID="b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34" Jan 21 17:03:15 crc kubenswrapper[4834]: E0121 17:03:15.946710 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34\": container with ID starting with b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34 not found: ID does not exist" containerID="b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.946765 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34"} err="failed to get container status \"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34\": rpc error: code = NotFound desc = could not find container \"b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34\": container with ID starting with b382413837b7c9ad7d6ac1010c13a8162779b7919055f37315a5f102ff6bca34 not found: ID does not exist" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.946801 4834 scope.go:117] "RemoveContainer" containerID="5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779" Jan 21 17:03:15 crc kubenswrapper[4834]: E0121 17:03:15.947206 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779\": container with ID starting with 5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779 not found: ID does not exist" containerID="5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779" Jan 21 17:03:15 crc kubenswrapper[4834]: I0121 17:03:15.947244 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779"} err="failed to get container status \"5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779\": rpc error: code = NotFound desc = could not find container \"5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779\": container with ID starting with 5a3358e1e373f1722b72eddbf7e10e95183c8aa4788794ef7719cee2695da779 not found: ID does not exist" Jan 21 17:03:16 crc kubenswrapper[4834]: I0121 17:03:16.198888 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:03:16 crc kubenswrapper[4834]: I0121 17:03:16.212055 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v4jbr"] Jan 21 17:03:16 crc kubenswrapper[4834]: I0121 17:03:16.360549 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" path="/var/lib/kubelet/pods/bfb3da22-f773-4bb8-a1c2-8a2608b39f20/volumes" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.114104 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.114460 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.114514 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.115438 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.115498 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" gracePeriod=600 Jan 21 17:03:17 crc kubenswrapper[4834]: E0121 17:03:17.241397 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.860863 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" exitCode=0 Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.860941 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171"} Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.860995 4834 scope.go:117] "RemoveContainer" containerID="7a43f1496aaaa188e26f03279b52cea0cb5732647ba53a6bc24d88b146f2bcd9" Jan 21 17:03:17 crc kubenswrapper[4834]: I0121 17:03:17.861829 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:03:17 crc kubenswrapper[4834]: E0121 17:03:17.862231 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:03:31 crc kubenswrapper[4834]: I0121 17:03:31.326424 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:03:31 crc kubenswrapper[4834]: E0121 17:03:31.327416 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:03:43 crc kubenswrapper[4834]: I0121 17:03:43.325202 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:03:43 crc kubenswrapper[4834]: E0121 17:03:43.326176 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.359033 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:03:53 crc kubenswrapper[4834]: E0121 17:03:53.360256 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="extract-content" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.360273 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" 
containerName="extract-content" Jan 21 17:03:53 crc kubenswrapper[4834]: E0121 17:03:53.360294 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="registry-server" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.360302 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="registry-server" Jan 21 17:03:53 crc kubenswrapper[4834]: E0121 17:03:53.360325 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="extract-utilities" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.360334 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="extract-utilities" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.360693 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb3da22-f773-4bb8-a1c2-8a2608b39f20" containerName="registry-server" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.362614 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.378474 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.549292 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.549675 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8w5\" (UniqueName: \"kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.550073 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.652098 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.652253 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.652354 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gf8w5\" (UniqueName: \"kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.652658 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.652991 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.675565 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8w5\" (UniqueName: \"kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5\") pod \"redhat-marketplace-rbslw\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:53 crc kubenswrapper[4834]: I0121 17:03:53.697053 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:03:54 crc kubenswrapper[4834]: I0121 17:03:54.242349 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:03:55 crc kubenswrapper[4834]: I0121 17:03:55.307370 4834 generic.go:334] "Generic (PLEG): container finished" podID="18e9319e-75e9-42f6-8578-1377b81726b2" containerID="079fff27c325344a77eca6fb6ca61c855b72aa02d0439511db7bdabedbe94024" exitCode=0 Jan 21 17:03:55 crc kubenswrapper[4834]: I0121 17:03:55.307431 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerDied","Data":"079fff27c325344a77eca6fb6ca61c855b72aa02d0439511db7bdabedbe94024"} Jan 21 17:03:55 crc kubenswrapper[4834]: I0121 17:03:55.307848 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerStarted","Data":"c975a61ead56b92681a723348b8b513a2d811e38d4f5265f33f03af0f7a54751"} Jan 21 17:03:57 crc kubenswrapper[4834]: I0121 17:03:57.335788 4834 generic.go:334] "Generic (PLEG): container finished" podID="18e9319e-75e9-42f6-8578-1377b81726b2" containerID="4018a80d0dddf02340b82ddb3c19e153aeea557b5916af8c9cc042e06a2afafd" exitCode=0 Jan 21 17:03:57 crc kubenswrapper[4834]: I0121 17:03:57.335955 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerDied","Data":"4018a80d0dddf02340b82ddb3c19e153aeea557b5916af8c9cc042e06a2afafd"} Jan 21 17:03:58 crc kubenswrapper[4834]: I0121 17:03:58.325221 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:03:58 crc kubenswrapper[4834]: E0121 17:03:58.326216 4834 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:03:59 crc kubenswrapper[4834]: I0121 17:03:59.359620 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerStarted","Data":"5a2b3297e6feb6865abf778c1813caa6ea5bc40392605e8cc20dc57ae380db26"} Jan 21 17:03:59 crc kubenswrapper[4834]: I0121 17:03:59.393179 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rbslw" podStartSLOduration=3.585001626 podStartE2EDuration="6.393162078s" podCreationTimestamp="2026-01-21 17:03:53 +0000 UTC" firstStartedPulling="2026-01-21 17:03:55.309402334 +0000 UTC m=+9181.283751379" lastFinishedPulling="2026-01-21 17:03:58.117562786 +0000 UTC m=+9184.091911831" observedRunningTime="2026-01-21 17:03:59.385638234 +0000 UTC m=+9185.359987279" watchObservedRunningTime="2026-01-21 17:03:59.393162078 +0000 UTC m=+9185.367511123" Jan 21 17:04:03 crc kubenswrapper[4834]: I0121 17:04:03.697402 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:03 crc kubenswrapper[4834]: I0121 17:04:03.697858 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:03 crc kubenswrapper[4834]: I0121 17:04:03.757005 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:04 crc kubenswrapper[4834]: I0121 17:04:04.522180 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:04 crc kubenswrapper[4834]: I0121 17:04:04.587649 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:04:06 crc kubenswrapper[4834]: I0121 17:04:06.489874 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rbslw" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="registry-server" containerID="cri-o://5a2b3297e6feb6865abf778c1813caa6ea5bc40392605e8cc20dc57ae380db26" gracePeriod=2 Jan 21 17:04:07 crc kubenswrapper[4834]: I0121 17:04:07.504739 4834 generic.go:334] "Generic (PLEG): container finished" podID="18e9319e-75e9-42f6-8578-1377b81726b2" containerID="5a2b3297e6feb6865abf778c1813caa6ea5bc40392605e8cc20dc57ae380db26" exitCode=0 Jan 21 17:04:07 crc kubenswrapper[4834]: I0121 17:04:07.504814 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerDied","Data":"5a2b3297e6feb6865abf778c1813caa6ea5bc40392605e8cc20dc57ae380db26"} Jan 21 17:04:07 crc kubenswrapper[4834]: I0121 17:04:07.838329 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.003182 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities\") pod \"18e9319e-75e9-42f6-8578-1377b81726b2\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.003677 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf8w5\" (UniqueName: \"kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5\") pod \"18e9319e-75e9-42f6-8578-1377b81726b2\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.003758 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content\") pod \"18e9319e-75e9-42f6-8578-1377b81726b2\" (UID: \"18e9319e-75e9-42f6-8578-1377b81726b2\") " Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.004387 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities" (OuterVolumeSpecName: "utilities") pod "18e9319e-75e9-42f6-8578-1377b81726b2" (UID: "18e9319e-75e9-42f6-8578-1377b81726b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.005311 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.011300 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5" (OuterVolumeSpecName: "kube-api-access-gf8w5") pod "18e9319e-75e9-42f6-8578-1377b81726b2" (UID: "18e9319e-75e9-42f6-8578-1377b81726b2"). InnerVolumeSpecName "kube-api-access-gf8w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.029729 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18e9319e-75e9-42f6-8578-1377b81726b2" (UID: "18e9319e-75e9-42f6-8578-1377b81726b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.107000 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf8w5\" (UniqueName: \"kubernetes.io/projected/18e9319e-75e9-42f6-8578-1377b81726b2-kube-api-access-gf8w5\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.107033 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18e9319e-75e9-42f6-8578-1377b81726b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.519798 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbslw" event={"ID":"18e9319e-75e9-42f6-8578-1377b81726b2","Type":"ContainerDied","Data":"c975a61ead56b92681a723348b8b513a2d811e38d4f5265f33f03af0f7a54751"} Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.519873 4834 scope.go:117] "RemoveContainer" containerID="5a2b3297e6feb6865abf778c1813caa6ea5bc40392605e8cc20dc57ae380db26" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.520162 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbslw" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.554305 4834 scope.go:117] "RemoveContainer" containerID="4018a80d0dddf02340b82ddb3c19e153aeea557b5916af8c9cc042e06a2afafd" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.556912 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:04:08 crc kubenswrapper[4834]: E0121 17:04:08.567126 4834 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e9319e_75e9_42f6_8578_1377b81726b2.slice/crio-c975a61ead56b92681a723348b8b513a2d811e38d4f5265f33f03af0f7a54751\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e9319e_75e9_42f6_8578_1377b81726b2.slice\": RecentStats: unable to find data in memory cache]" Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.575795 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbslw"] Jan 21 17:04:08 crc kubenswrapper[4834]: I0121 17:04:08.581736 4834 scope.go:117] "RemoveContainer" containerID="079fff27c325344a77eca6fb6ca61c855b72aa02d0439511db7bdabedbe94024" Jan 21 17:04:10 crc kubenswrapper[4834]: I0121 17:04:10.338997 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" path="/var/lib/kubelet/pods/18e9319e-75e9-42f6-8578-1377b81726b2/volumes" Jan 21 17:04:12 crc kubenswrapper[4834]: I0121 17:04:12.325598 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:04:12 crc kubenswrapper[4834]: E0121 17:04:12.326094 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:04:25 crc kubenswrapper[4834]: I0121 
17:04:25.326021 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:04:25 crc kubenswrapper[4834]: E0121 17:04:25.326991 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:04:38 crc kubenswrapper[4834]: I0121 17:04:38.330431 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:04:38 crc kubenswrapper[4834]: E0121 17:04:38.331150 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:04:53 crc kubenswrapper[4834]: I0121 17:04:53.325018 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:04:53 crc kubenswrapper[4834]: E0121 17:04:53.325848 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:05:08 crc kubenswrapper[4834]: I0121 17:05:08.330348 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:05:08 crc kubenswrapper[4834]: E0121 17:05:08.331258 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:05:23 crc kubenswrapper[4834]: I0121 17:05:23.326699 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:05:23 crc kubenswrapper[4834]: E0121 17:05:23.327853 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:05:27 crc kubenswrapper[4834]: I0121 17:05:27.551280 4834 trace.go:236] Trace[1516831419]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-1" (21-Jan-2026 17:05:26.436) (total time: 1114ms): 
Jan 21 17:05:27 crc kubenswrapper[4834]: Trace[1516831419]: [1.114340234s] [1.114340234s] END Jan 21 17:05:34 crc kubenswrapper[4834]: I0121 17:05:34.331405 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:05:34 crc kubenswrapper[4834]: E0121 17:05:34.336074 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:05:46 crc kubenswrapper[4834]: I0121 17:05:46.329689 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:05:46 crc kubenswrapper[4834]: E0121 17:05:46.330671 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:05:57 crc kubenswrapper[4834]: I0121 17:05:57.325192 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:05:57 crc kubenswrapper[4834]: E0121 17:05:57.326427 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:06:11 crc kubenswrapper[4834]: I0121 17:06:11.326117 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:06:11 crc kubenswrapper[4834]: E0121 17:06:11.327280 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:06:22 crc kubenswrapper[4834]: I0121 17:06:22.324838 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:06:22 crc kubenswrapper[4834]: E0121 17:06:22.325873 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:06:36 crc kubenswrapper[4834]: I0121 17:06:36.328280 4834 scope.go:117] "RemoveContainer" 
containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:06:36 crc kubenswrapper[4834]: E0121 17:06:36.329136 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.389194 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:38 crc kubenswrapper[4834]: E0121 17:06:38.390054 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="extract-utilities" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.390069 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="extract-utilities" Jan 21 17:06:38 crc kubenswrapper[4834]: E0121 17:06:38.390097 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="extract-content" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.390103 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="extract-content" Jan 21 17:06:38 crc kubenswrapper[4834]: E0121 17:06:38.390136 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="registry-server" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.390142 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="registry-server" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.390367 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e9319e-75e9-42f6-8578-1377b81726b2" containerName="registry-server" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.392379 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.401768 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.431494 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbzh\" (UniqueName: \"kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.431985 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.432283 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.535027 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.535442 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.535607 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fbzh\" (UniqueName: \"kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.535793 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.536356 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.558088 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8fbzh\" (UniqueName: \"kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh\") pod \"certified-operators-pjtjv\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:38 crc kubenswrapper[4834]: I0121 17:06:38.735120 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:39 crc kubenswrapper[4834]: I0121 17:06:39.361213 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:40 crc kubenswrapper[4834]: I0121 17:06:40.384664 4834 generic.go:334] "Generic (PLEG): container finished" podID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerID="7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356" exitCode=0 Jan 21 17:06:40 crc kubenswrapper[4834]: I0121 17:06:40.385248 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerDied","Data":"7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356"} Jan 21 17:06:40 crc kubenswrapper[4834]: I0121 17:06:40.385277 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerStarted","Data":"16f0e26a7a625aa3c7604887c6a1a99d260072ea16d4c3a7f1078103e7edbd3d"} Jan 21 17:06:40 crc kubenswrapper[4834]: I0121 17:06:40.389158 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:06:42 crc kubenswrapper[4834]: I0121 17:06:42.425008 4834 generic.go:334] "Generic (PLEG): container finished" podID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerID="feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7" exitCode=0 Jan 21 17:06:42 crc kubenswrapper[4834]: I0121 17:06:42.425156 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerDied","Data":"feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7"} Jan 21 17:06:43 crc kubenswrapper[4834]: I0121 17:06:43.436668 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerStarted","Data":"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c"} Jan 21 17:06:43 crc kubenswrapper[4834]: I0121 17:06:43.468147 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pjtjv" podStartSLOduration=2.999534115 podStartE2EDuration="5.468123668s" podCreationTimestamp="2026-01-21 17:06:38 +0000 UTC" firstStartedPulling="2026-01-21 17:06:40.388918068 +0000 UTC m=+9346.363267103" lastFinishedPulling="2026-01-21 17:06:42.857507611 +0000 UTC m=+9348.831856656" observedRunningTime="2026-01-21 17:06:43.459080635 +0000 UTC m=+9349.433429690" watchObservedRunningTime="2026-01-21 17:06:43.468123668 +0000 UTC m=+9349.442472723" Jan 21 17:06:48 crc kubenswrapper[4834]: I0121 17:06:48.736276 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:48 crc kubenswrapper[4834]: I0121 17:06:48.738200 4834 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:49 crc kubenswrapper[4834]: I0121 17:06:49.067339 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:49 crc kubenswrapper[4834]: I0121 17:06:49.325726 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:06:49 crc kubenswrapper[4834]: E0121 17:06:49.326251 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:06:49 crc kubenswrapper[4834]: I0121 17:06:49.550866 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:49 crc kubenswrapper[4834]: I0121 17:06:49.607427 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:51 crc kubenswrapper[4834]: I0121 17:06:51.517886 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pjtjv" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="registry-server" containerID="cri-o://b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c" gracePeriod=2 Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.091314 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.189818 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content\") pod \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.190140 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fbzh\" (UniqueName: \"kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh\") pod \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.190284 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities\") pod \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\" (UID: \"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20\") " Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.191467 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities" (OuterVolumeSpecName: "utilities") pod "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" (UID: "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.197111 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh" (OuterVolumeSpecName: "kube-api-access-8fbzh") pod "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" (UID: "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20"). InnerVolumeSpecName "kube-api-access-8fbzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.244440 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" (UID: "8e9ae415-ff51-4a3e-b60b-f4430ed5ba20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.293258 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.293299 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.293318 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fbzh\" (UniqueName: \"kubernetes.io/projected/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20-kube-api-access-8fbzh\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.529127 4834 generic.go:334] "Generic (PLEG): container finished" podID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerID="b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c" exitCode=0 Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.529177 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerDied","Data":"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c"} Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.529208 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pjtjv" event={"ID":"8e9ae415-ff51-4a3e-b60b-f4430ed5ba20","Type":"ContainerDied","Data":"16f0e26a7a625aa3c7604887c6a1a99d260072ea16d4c3a7f1078103e7edbd3d"} Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.529230 4834 scope.go:117] "RemoveContainer" containerID="b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.529366 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pjtjv" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.561148 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.564436 4834 scope.go:117] "RemoveContainer" containerID="feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.571378 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pjtjv"] Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.599979 4834 scope.go:117] "RemoveContainer" containerID="7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.635530 4834 scope.go:117] "RemoveContainer" containerID="b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c" Jan 21 17:06:52 crc kubenswrapper[4834]: E0121 17:06:52.636110 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c\": container with ID starting with b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c not found: ID does not exist" containerID="b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.636242 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c"} err="failed to get container status \"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c\": rpc error: code = NotFound desc = could not find container \"b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c\": container with ID starting with b9f5882f0dc118937be66f6607cf277c6c00e3654990102bb000197ae92b673c not found: ID does not exist" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.636354 4834 scope.go:117] "RemoveContainer" containerID="feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7" Jan 21 17:06:52 crc kubenswrapper[4834]: E0121 17:06:52.637182 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7\": container with ID starting with feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7 not found: ID does not exist" containerID="feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.637216 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7"} err="failed to get container status \"feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7\": rpc error: code = NotFound desc = could not find container \"feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7\": container with ID starting with feb6cd094029b655affb884cd956545eb0ad2dc2d8e8e1ba2bb83b3013a0cbd7 not found: ID does not exist" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.637242 4834 scope.go:117] "RemoveContainer" containerID="7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356" Jan 21 17:06:52 crc kubenswrapper[4834]: E0121 17:06:52.637471 4834 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356\": container with ID starting with 7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356 not found: ID does not exist" containerID="7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356" Jan 21 17:06:52 crc kubenswrapper[4834]: I0121 17:06:52.637495 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356"} err="failed to get container status \"7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356\": rpc error: code = NotFound desc = could not find container \"7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356\": container with ID starting with 7f49481d102ca605fb21dd4d6c4be4f3320ea2997f8292780e213baa8a003356 not found: ID does not exist" Jan 21 17:06:54 crc kubenswrapper[4834]: I0121 17:06:54.360656 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" path="/var/lib/kubelet/pods/8e9ae415-ff51-4a3e-b60b-f4430ed5ba20/volumes" Jan 21 17:07:04 crc kubenswrapper[4834]: I0121 17:07:04.326255 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:07:04 crc kubenswrapper[4834]: E0121 17:07:04.326947 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:07:17 crc kubenswrapper[4834]: I0121 17:07:17.324965 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:07:17 crc kubenswrapper[4834]: E0121 17:07:17.325944 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:07:30 crc kubenswrapper[4834]: I0121 17:07:30.324786 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:07:30 crc kubenswrapper[4834]: E0121 17:07:30.325682 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:07:44 crc kubenswrapper[4834]: I0121 17:07:44.331504 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:07:44 crc kubenswrapper[4834]: E0121 17:07:44.332244 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:07:57 crc kubenswrapper[4834]: I0121 17:07:57.325740 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:07:57 crc kubenswrapper[4834]: E0121 17:07:57.326373 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:08:12 crc kubenswrapper[4834]: I0121 17:08:12.336031 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:08:12 crc kubenswrapper[4834]: E0121 17:08:12.337078 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:08:17 crc kubenswrapper[4834]: I0121 17:08:17.479302 4834 scope.go:117] "RemoveContainer" containerID="f08667e4b8beb334b3e4d1cb2bdb3c110d3423d239bd28d0ebbc951c8b5c046b" Jan 21 17:08:17 crc kubenswrapper[4834]: I0121 17:08:17.502935 4834 scope.go:117] "RemoveContainer" containerID="c0593c282d23bec8b0fde83bc8b72352ec9fd063f61cd79c7a5dd86964020284" Jan 21 17:08:17 crc kubenswrapper[4834]: I0121 17:08:17.533414 4834 scope.go:117] "RemoveContainer" containerID="b1398e1825ac4feb90c1e4ec98a1a53e22478cada8824024a908ee0dc0c3db54" Jan 21 17:08:26 crc kubenswrapper[4834]: I0121 17:08:26.325030 4834 scope.go:117] "RemoveContainer" containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:08:27 crc kubenswrapper[4834]: I0121 17:08:27.552527 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d"} Jan 21 17:10:47 crc kubenswrapper[4834]: I0121 17:10:47.113545 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:47 crc kubenswrapper[4834]: I0121 17:10:47.114178 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:17 crc kubenswrapper[4834]: I0121 17:11:17.115587 4834 
patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:17 crc kubenswrapper[4834]: I0121 17:11:17.118238 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:47 crc kubenswrapper[4834]: I0121 17:11:47.113908 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:47 crc kubenswrapper[4834]: I0121 17:11:47.116748 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:47 crc kubenswrapper[4834]: I0121 17:11:47.117100 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 17:11:47 crc kubenswrapper[4834]: I0121 17:11:47.118532 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:11:47 crc kubenswrapper[4834]: I0121 17:11:47.118821 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d" gracePeriod=600 Jan 21 17:11:48 crc kubenswrapper[4834]: I0121 17:11:48.651013 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d" exitCode=0 Jan 21 17:11:48 crc kubenswrapper[4834]: I0121 17:11:48.651377 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d"} Jan 21 17:11:48 crc kubenswrapper[4834]: I0121 17:11:48.651458 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f"} Jan 21 17:11:48 crc kubenswrapper[4834]: I0121 17:11:48.651479 4834 scope.go:117] "RemoveContainer" 
containerID="ede2bca1f23880c96e0a59befe979cbb3dc60803eb4424a14d221c8586594171" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.912493 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:18 crc kubenswrapper[4834]: E0121 17:12:18.913529 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="registry-server" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.913612 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="registry-server" Jan 21 17:12:18 crc kubenswrapper[4834]: E0121 17:12:18.913625 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="extract-content" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.913631 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="extract-content" Jan 21 17:12:18 crc kubenswrapper[4834]: E0121 17:12:18.913674 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="extract-utilities" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.913682 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="extract-utilities" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.913884 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9ae415-ff51-4a3e-b60b-f4430ed5ba20" containerName="registry-server" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.915549 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.929856 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.962539 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.962672 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnpzr\" (UniqueName: \"kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:18 crc kubenswrapper[4834]: I0121 17:12:18.963027 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.065848 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities\") pod \"community-operators-rnntr\" (UID: 
\"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.066066 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.066113 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnpzr\" (UniqueName: \"kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.066787 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.066796 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.089275 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnpzr\" (UniqueName: \"kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr\") pod \"community-operators-rnntr\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.233641 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.832721 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:19 crc kubenswrapper[4834]: W0121 17:12:19.838386 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e8eef0_8b8a_4db5_a2e0_fee547ee3e4a.slice/crio-c0d9085046ca319a538071c5f640a429f0c87df332bdc20abcfb77b8940ebad4 WatchSource:0}: Error finding container c0d9085046ca319a538071c5f640a429f0c87df332bdc20abcfb77b8940ebad4: Status 404 returned error can't find the container with id c0d9085046ca319a538071c5f640a429f0c87df332bdc20abcfb77b8940ebad4 Jan 21 17:12:19 crc kubenswrapper[4834]: I0121 17:12:19.982748 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerStarted","Data":"c0d9085046ca319a538071c5f640a429f0c87df332bdc20abcfb77b8940ebad4"} Jan 21 17:12:20 crc kubenswrapper[4834]: I0121 17:12:20.992998 4834 generic.go:334] "Generic (PLEG): container finished" podID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerID="f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d" exitCode=0 Jan 21 17:12:20 crc kubenswrapper[4834]: I0121 17:12:20.993087 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerDied","Data":"f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d"} Jan 21 17:12:20 crc kubenswrapper[4834]: I0121 17:12:20.996124 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:12:22 crc kubenswrapper[4834]: I0121 17:12:22.005145 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerStarted","Data":"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90"} Jan 21 17:12:23 crc kubenswrapper[4834]: I0121 17:12:23.016177 4834 generic.go:334] "Generic (PLEG): container finished" podID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerID="b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90" exitCode=0 Jan 21 17:12:23 crc kubenswrapper[4834]: I0121 17:12:23.016296 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerDied","Data":"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90"} Jan 21 17:12:24 crc kubenswrapper[4834]: I0121 17:12:24.034711 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerStarted","Data":"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb"} Jan 21 17:12:24 crc kubenswrapper[4834]: I0121 17:12:24.062413 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rnntr" podStartSLOduration=3.403036528 podStartE2EDuration="6.062389163s" podCreationTimestamp="2026-01-21 17:12:18 +0000 UTC" firstStartedPulling="2026-01-21 17:12:20.99583864 +0000 UTC m=+9686.970187685" lastFinishedPulling="2026-01-21 17:12:23.655191275 +0000 UTC m=+9689.629540320" 
observedRunningTime="2026-01-21 17:12:24.055353542 +0000 UTC m=+9690.029702597" watchObservedRunningTime="2026-01-21 17:12:24.062389163 +0000 UTC m=+9690.036738208" Jan 21 17:12:29 crc kubenswrapper[4834]: I0121 17:12:29.233952 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:29 crc kubenswrapper[4834]: I0121 17:12:29.236068 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:29 crc kubenswrapper[4834]: I0121 17:12:29.303153 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:30 crc kubenswrapper[4834]: I0121 17:12:30.169794 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:30 crc kubenswrapper[4834]: I0121 17:12:30.224683 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.127454 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rnntr" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="registry-server" containerID="cri-o://e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb" gracePeriod=2 Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.594312 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.623641 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnpzr\" (UniqueName: \"kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr\") pod \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.623701 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities\") pod \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.623845 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content\") pod \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\" (UID: \"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a\") " Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.628118 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities" (OuterVolumeSpecName: "utilities") pod "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" (UID: "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.634848 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr" (OuterVolumeSpecName: "kube-api-access-xnpzr") pod "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" (UID: "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a"). 
InnerVolumeSpecName "kube-api-access-xnpzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.687565 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" (UID: "39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.726812 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnpzr\" (UniqueName: \"kubernetes.io/projected/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-kube-api-access-xnpzr\") on node \"crc\" DevicePath \"\"" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.726879 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:12:32 crc kubenswrapper[4834]: I0121 17:12:32.726893 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.139100 4834 generic.go:334] "Generic (PLEG): container finished" podID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerID="e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb" exitCode=0 Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.139172 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnntr" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.139159 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerDied","Data":"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb"} Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.139236 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnntr" event={"ID":"39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a","Type":"ContainerDied","Data":"c0d9085046ca319a538071c5f640a429f0c87df332bdc20abcfb77b8940ebad4"} Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.139328 4834 scope.go:117] "RemoveContainer" containerID="e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.169417 4834 scope.go:117] "RemoveContainer" containerID="b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.188839 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.195659 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rnntr"] Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.213629 4834 scope.go:117] "RemoveContainer" containerID="f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.254168 4834 scope.go:117] "RemoveContainer" containerID="e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb" Jan 21 17:12:33 crc 
kubenswrapper[4834]: E0121 17:12:33.260052 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb\": container with ID starting with e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb not found: ID does not exist" containerID="e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.260104 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb"} err="failed to get container status \"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb\": rpc error: code = NotFound desc = could not find container \"e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb\": container with ID starting with e26e59cb270a239d1d32bc67a66ac9180652079ef1a05d31a4bc38e6ca5a07fb not found: ID does not exist" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.260134 4834 scope.go:117] "RemoveContainer" containerID="b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90" Jan 21 17:12:33 crc kubenswrapper[4834]: E0121 17:12:33.261223 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90\": container with ID starting with b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90 not found: ID does not exist" containerID="b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.261252 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90"} err="failed to get container status \"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90\": rpc error: code = NotFound desc = could not find container \"b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90\": container with ID starting with b0021c0da994f26720063a9f335f41dc31a7edb8a57e4c3cc3c44a8f478aec90 not found: ID does not exist" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.261271 4834 scope.go:117] "RemoveContainer" containerID="f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d" Jan 21 17:12:33 crc kubenswrapper[4834]: E0121 17:12:33.261899 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d\": container with ID starting with f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d not found: ID does not exist" containerID="f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d" Jan 21 17:12:33 crc kubenswrapper[4834]: I0121 17:12:33.262063 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d"} err="failed to get container status \"f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d\": rpc error: code = NotFound desc = could not find container \"f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d\": container with ID starting with f42cd5d64b20af55c552b1158906cbec2e6d6080ea8aa5d44db5344309a0ce3d not found: ID does not exist" Jan 21 17:12:34 crc kubenswrapper[4834]: 
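
The RemoveContainer / NotFound exchange above is harmless: by the time the kubelet retries deletion, CRI-O has already removed the containers, so the runtime answers ContainerStatus with gRPC NotFound and the kubelet only logs it before moving on. A sketch of the idempotent-cleanup pattern behind that behavior; removeIfPresent and the runtimeService interface are hypothetical stand-ins for the CRI client, not kubelet API:

// cleanup_sketch.go: treat gRPC NotFound as "already removed" when
// deleting containers.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type runtimeService interface {
	RemoveContainer(id string) error
}

func removeIfPresent(rs runtimeService, id string) error {
	if err := rs.RemoveContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// The outcome the log shows: the container is already gone,
			// so deletion is idempotent and the error is informational.
			return nil
		}
		return fmt.Errorf("remove container %s: %w", id, err)
	}
	return nil
}

// fakeRuntime reproduces the runtime's answer in the log above.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	fmt.Println(removeIfPresent(fakeRuntime{}, "e26e59cb")) // prints <nil>
}
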
Jan 21 17:12:34 crc kubenswrapper[4834]: I0121 17:12:34.338463 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" path="/var/lib/kubelet/pods/39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a/volumes"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.903641 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"]
Jan 21 17:13:17 crc kubenswrapper[4834]: E0121 17:13:17.905436 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="registry-server"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.905461 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="registry-server"
Jan 21 17:13:17 crc kubenswrapper[4834]: E0121 17:13:17.905484 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="extract-utilities"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.905492 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="extract-utilities"
Jan 21 17:13:17 crc kubenswrapper[4834]: E0121 17:13:17.905531 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="extract-content"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.905541 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="extract-content"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.905799 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e8eef0-8b8a-4db5-a2e0-fee547ee3e4a" containerName="registry-server"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.912080 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:17 crc kubenswrapper[4834]: I0121 17:13:17.938730 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"]
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.023807 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.023949 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz2hm\" (UniqueName: \"kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.024009 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.125519 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz2hm\" (UniqueName: \"kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.125640 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.125720 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.126471 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.126730 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.148724 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz2hm\" (UniqueName: \"kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm\") pod \"redhat-operators-w5xz9\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.242450 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:18 crc kubenswrapper[4834]: I0121 17:13:18.864985 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"]
Jan 21 17:13:19 crc kubenswrapper[4834]: I0121 17:13:19.647884 4834 generic.go:334] "Generic (PLEG): container finished" podID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerID="c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907" exitCode=0
Jan 21 17:13:19 crc kubenswrapper[4834]: I0121 17:13:19.648179 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerDied","Data":"c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907"}
Jan 21 17:13:19 crc kubenswrapper[4834]: I0121 17:13:19.648207 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerStarted","Data":"b784b48c2be137803382ea1e06bb6b013dfded15306a5463adc5f9fd61964da1"}
Jan 21 17:13:21 crc kubenswrapper[4834]: I0121 17:13:21.680045 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerStarted","Data":"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef"}
Jan 21 17:13:23 crc kubenswrapper[4834]: I0121 17:13:23.705967 4834 generic.go:334] "Generic (PLEG): container finished" podID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerID="7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef" exitCode=0
Jan 21 17:13:23 crc kubenswrapper[4834]: I0121 17:13:23.706087 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerDied","Data":"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef"}
Jan 21 17:13:25 crc kubenswrapper[4834]: I0121 17:13:25.734653 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerStarted","Data":"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a"}
Jan 21 17:13:25 crc kubenswrapper[4834]: I0121 17:13:25.755681 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w5xz9" podStartSLOduration=3.7668951550000003 podStartE2EDuration="8.755648524s" podCreationTimestamp="2026-01-21 17:13:17 +0000 UTC" firstStartedPulling="2026-01-21 17:13:19.649808373 +0000 UTC m=+9745.624157418" lastFinishedPulling="2026-01-21 17:13:24.638561732 +0000 UTC m=+9750.612910787" observedRunningTime="2026-01-21 17:13:25.752261008 +0000 UTC m=+9751.726610063" watchObservedRunningTime="2026-01-21 17:13:25.755648524 +0000 UTC m=+9751.729997599"
Jan 21 17:13:28 crc kubenswrapper[4834]: I0121 17:13:28.242735 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:28 crc kubenswrapper[4834]: I0121 17:13:28.243998 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w5xz9"
Jan 21 17:13:29 crc kubenswrapper[4834]: I0121 17:13:29.292576 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w5xz9" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" probeResult="failure" output=<
Jan 21 17:13:29 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s
Jan 21 17:13:29 crc kubenswrapper[4834]: >
pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:28 crc kubenswrapper[4834]: I0121 17:13:28.243998 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:29 crc kubenswrapper[4834]: I0121 17:13:29.292576 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w5xz9" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" probeResult="failure" output=< Jan 21 17:13:29 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 17:13:29 crc kubenswrapper[4834]: > Jan 21 17:13:38 crc kubenswrapper[4834]: I0121 17:13:38.306087 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:38 crc kubenswrapper[4834]: I0121 17:13:38.366860 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:38 crc kubenswrapper[4834]: I0121 17:13:38.557378 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"] Jan 21 17:13:39 crc kubenswrapper[4834]: I0121 17:13:39.888867 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w5xz9" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" containerID="cri-o://1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a" gracePeriod=2 Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.437325 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.576103 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz2hm\" (UniqueName: \"kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm\") pod \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.576518 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content\") pod \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.576629 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities\") pod \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\" (UID: \"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d\") " Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.577541 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities" (OuterVolumeSpecName: "utilities") pod "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" (UID: "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.592662 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm" (OuterVolumeSpecName: "kube-api-access-xz2hm") pod "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" (UID: "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d"). InnerVolumeSpecName "kube-api-access-xz2hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.679607 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz2hm\" (UniqueName: \"kubernetes.io/projected/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-kube-api-access-xz2hm\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.679669 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.698395 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" (UID: "a3d70959-a9a5-4a7d-8cd4-1d02fba9812d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.781284 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.901736 4834 generic.go:334] "Generic (PLEG): container finished" podID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerID="1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a" exitCode=0 Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.901800 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerDied","Data":"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a"} Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.901849 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5xz9" event={"ID":"a3d70959-a9a5-4a7d-8cd4-1d02fba9812d","Type":"ContainerDied","Data":"b784b48c2be137803382ea1e06bb6b013dfded15306a5463adc5f9fd61964da1"} Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.901853 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w5xz9" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.901878 4834 scope.go:117] "RemoveContainer" containerID="1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.922714 4834 scope.go:117] "RemoveContainer" containerID="7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef" Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.951900 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"] Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.963879 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w5xz9"] Jan 21 17:13:40 crc kubenswrapper[4834]: I0121 17:13:40.964115 4834 scope.go:117] "RemoveContainer" containerID="c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.015068 4834 scope.go:117] "RemoveContainer" containerID="1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a" Jan 21 17:13:41 crc kubenswrapper[4834]: E0121 17:13:41.016433 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a\": container with ID starting with 1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a not found: ID does not exist" containerID="1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.016481 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a"} err="failed to get container status \"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a\": rpc error: code = NotFound desc = could not find container \"1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a\": container with ID starting with 1c6022852014dce292a98f25f70aae1697f15d881ed2c046c6badd4138d7d65a not found: ID does not exist" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.016515 4834 scope.go:117] "RemoveContainer" containerID="7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef" Jan 21 17:13:41 crc kubenswrapper[4834]: E0121 17:13:41.016910 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef\": container with ID starting with 7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef not found: ID does not exist" containerID="7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.016949 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef"} err="failed to get container status \"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef\": rpc error: code = NotFound desc = could not find container \"7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef\": container with ID starting with 7425ec8dda01d8904d9f518fe43df68382eaec20791bbff92541c90be57566ef not found: ID does not exist" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.016966 4834 scope.go:117] "RemoveContainer" 
containerID="c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907" Jan 21 17:13:41 crc kubenswrapper[4834]: E0121 17:13:41.017188 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907\": container with ID starting with c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907 not found: ID does not exist" containerID="c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907" Jan 21 17:13:41 crc kubenswrapper[4834]: I0121 17:13:41.017209 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907"} err="failed to get container status \"c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907\": rpc error: code = NotFound desc = could not find container \"c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907\": container with ID starting with c6d902fd1813f2346ec67468bccb353c98c5f7079d45c4b597d5f4554c925907 not found: ID does not exist" Jan 21 17:13:42 crc kubenswrapper[4834]: I0121 17:13:42.339278 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" path="/var/lib/kubelet/pods/a3d70959-a9a5-4a7d-8cd4-1d02fba9812d/volumes" Jan 21 17:14:17 crc kubenswrapper[4834]: I0121 17:14:17.118457 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:14:17 crc kubenswrapper[4834]: I0121 17:14:17.119019 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:14:47 crc kubenswrapper[4834]: I0121 17:14:47.114247 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:14:47 crc kubenswrapper[4834]: I0121 17:14:47.114833 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.111561 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-scm4f"] Jan 21 17:14:50 crc kubenswrapper[4834]: E0121 17:14:50.112536 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.112556 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" Jan 21 17:14:50 crc kubenswrapper[4834]: E0121 17:14:50.112572 4834 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="extract-content" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.112580 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="extract-content" Jan 21 17:14:50 crc kubenswrapper[4834]: E0121 17:14:50.112601 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="extract-utilities" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.112609 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="extract-utilities" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.112983 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d70959-a9a5-4a7d-8cd4-1d02fba9812d" containerName="registry-server" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.117132 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.124568 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scm4f"] Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.191941 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j758j\" (UniqueName: \"kubernetes.io/projected/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-kube-api-access-j758j\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.192058 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-catalog-content\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.192090 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-utilities\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.294025 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-catalog-content\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.294313 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-utilities\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.294597 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-catalog-content\") pod 
\"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.294678 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-utilities\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.294809 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j758j\" (UniqueName: \"kubernetes.io/projected/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-kube-api-access-j758j\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.314788 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j758j\" (UniqueName: \"kubernetes.io/projected/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-kube-api-access-j758j\") pod \"redhat-marketplace-scm4f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") " pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.449225 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:14:50 crc kubenswrapper[4834]: I0121 17:14:50.975623 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-scm4f"] Jan 21 17:14:51 crc kubenswrapper[4834]: I0121 17:14:51.701013 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerID="82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995" exitCode=0 Jan 21 17:14:51 crc kubenswrapper[4834]: I0121 17:14:51.701111 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerDied","Data":"82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995"} Jan 21 17:14:51 crc kubenswrapper[4834]: I0121 17:14:51.701353 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerStarted","Data":"028bb9cdf085a42e8f3042c6ea3992956fb14ca583283f2dee3f07704356a2ca"} Jan 21 17:14:52 crc kubenswrapper[4834]: I0121 17:14:52.715942 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerStarted","Data":"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"} Jan 21 17:14:53 crc kubenswrapper[4834]: I0121 17:14:53.734061 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerID="db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7" exitCode=0 Jan 21 17:14:53 crc kubenswrapper[4834]: I0121 17:14:53.734159 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerDied","Data":"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"} Jan 21 17:14:54 crc kubenswrapper[4834]: I0121 17:14:54.744484 4834 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerStarted","Data":"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"} Jan 21 17:14:54 crc kubenswrapper[4834]: I0121 17:14:54.769479 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-scm4f" podStartSLOduration=2.225970969 podStartE2EDuration="4.769461219s" podCreationTimestamp="2026-01-21 17:14:50 +0000 UTC" firstStartedPulling="2026-01-21 17:14:51.703128803 +0000 UTC m=+9837.677477858" lastFinishedPulling="2026-01-21 17:14:54.246619063 +0000 UTC m=+9840.220968108" observedRunningTime="2026-01-21 17:14:54.760834509 +0000 UTC m=+9840.735183554" watchObservedRunningTime="2026-01-21 17:14:54.769461219 +0000 UTC m=+9840.743810264" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.179433 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6"] Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.181954 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.184466 4834 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.187470 4834 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.201607 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6"] Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.322173 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2bn\" (UniqueName: \"kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.322260 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.322421 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.425037 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: 
\"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.425292 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2bn\" (UniqueName: \"kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.426172 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.427138 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.433223 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.449481 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.451023 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.452960 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2bn\" (UniqueName: \"kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn\") pod \"collect-profiles-29483595-cbwq6\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.506328 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.512905 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.872542 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-scm4f" Jan 21 17:15:00 crc kubenswrapper[4834]: I0121 17:15:00.928619 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-scm4f"] Jan 21 17:15:01 crc kubenswrapper[4834]: I0121 17:15:00.991614 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6"] Jan 21 17:15:01 crc kubenswrapper[4834]: I0121 17:15:01.833229 4834 generic.go:334] "Generic (PLEG): container finished" podID="6d8c300d-4b72-430c-b266-4748e91253f8" containerID="947f62be5f435667f60f2fa03c72fa1063f92ad57e90d64b0d8d82c63e6436ad" exitCode=0 Jan 21 17:15:01 crc kubenswrapper[4834]: I0121 17:15:01.833770 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" event={"ID":"6d8c300d-4b72-430c-b266-4748e91253f8","Type":"ContainerDied","Data":"947f62be5f435667f60f2fa03c72fa1063f92ad57e90d64b0d8d82c63e6436ad"} Jan 21 17:15:01 crc kubenswrapper[4834]: I0121 17:15:01.833832 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" event={"ID":"6d8c300d-4b72-430c-b266-4748e91253f8","Type":"ContainerStarted","Data":"991cec2dc88146f3dc2f6c813806b07580751126f1dc2014ecdf0a314351f607"} Jan 21 17:15:02 crc kubenswrapper[4834]: I0121 17:15:02.841481 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-scm4f" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="registry-server" containerID="cri-o://9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad" gracePeriod=2 Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.223214 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.294518 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2bn\" (UniqueName: \"kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn\") pod \"6d8c300d-4b72-430c-b266-4748e91253f8\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.295805 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume\") pod \"6d8c300d-4b72-430c-b266-4748e91253f8\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.296116 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume\") pod \"6d8c300d-4b72-430c-b266-4748e91253f8\" (UID: \"6d8c300d-4b72-430c-b266-4748e91253f8\") " Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.296661 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d8c300d-4b72-430c-b266-4748e91253f8" (UID: "6d8c300d-4b72-430c-b266-4748e91253f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.296946 4834 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8c300d-4b72-430c-b266-4748e91253f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.301639 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d8c300d-4b72-430c-b266-4748e91253f8" (UID: "6d8c300d-4b72-430c-b266-4748e91253f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.301688 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn" (OuterVolumeSpecName: "kube-api-access-ln2bn") pod "6d8c300d-4b72-430c-b266-4748e91253f8" (UID: "6d8c300d-4b72-430c-b266-4748e91253f8"). InnerVolumeSpecName "kube-api-access-ln2bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.399302 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2bn\" (UniqueName: \"kubernetes.io/projected/6d8c300d-4b72-430c-b266-4748e91253f8-kube-api-access-ln2bn\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.399818 4834 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d8c300d-4b72-430c-b266-4748e91253f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.754427 4834 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.754427 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scm4f"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.852995 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.853092 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483595-cbwq6" event={"ID":"6d8c300d-4b72-430c-b266-4748e91253f8","Type":"ContainerDied","Data":"991cec2dc88146f3dc2f6c813806b07580751126f1dc2014ecdf0a314351f607"}
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.853203 4834 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="991cec2dc88146f3dc2f6c813806b07580751126f1dc2014ecdf0a314351f607"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.856485 4834 generic.go:334] "Generic (PLEG): container finished" podID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerID="9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad" exitCode=0
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.856533 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerDied","Data":"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"}
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.856567 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-scm4f" event={"ID":"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f","Type":"ContainerDied","Data":"028bb9cdf085a42e8f3042c6ea3992956fb14ca583283f2dee3f07704356a2ca"}
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.856591 4834 scope.go:117] "RemoveContainer" containerID="9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.856749 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-scm4f"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.882708 4834 scope.go:117] "RemoveContainer" containerID="db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.902514 4834 scope.go:117] "RemoveContainer" containerID="82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.909782 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-catalog-content\") pod \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") "
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.909947 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-utilities\") pod \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") "
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.910144 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j758j\" (UniqueName: \"kubernetes.io/projected/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-kube-api-access-j758j\") pod \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\" (UID: \"3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f\") "
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.911080 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-utilities" (OuterVolumeSpecName: "utilities") pod "3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" (UID: "3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.914491 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f-kube-api-access-j758j" (OuterVolumeSpecName: "kube-api-access-j758j") pod "3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" (UID: "3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f"). InnerVolumeSpecName "kube-api-access-j758j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.922225 4834 scope.go:117] "RemoveContainer" containerID="9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"
Jan 21 17:15:03 crc kubenswrapper[4834]: E0121 17:15:03.922824 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad\": container with ID starting with 9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad not found: ID does not exist" containerID="9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.922878 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad"} err="failed to get container status \"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad\": rpc error: code = NotFound desc = could not find container \"9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad\": container with ID starting with 9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad not found: ID does not exist"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.922909 4834 scope.go:117] "RemoveContainer" containerID="db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"
Jan 21 17:15:03 crc kubenswrapper[4834]: E0121 17:15:03.923275 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7\": container with ID starting with db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7 not found: ID does not exist" containerID="db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.923308 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7"} err="failed to get container status \"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7\": rpc error: code = NotFound desc = could not find container \"db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7\": container with ID starting with db3688a1bb063fcd33d15f63b9044d30b7aa52c0277e2d3fbb51502e688d60c7 not found: ID does not exist"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.923328 4834 scope.go:117] "RemoveContainer" containerID="82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995"
Jan 21 17:15:03 crc kubenswrapper[4834]: E0121 17:15:03.923552 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995\": container with ID starting with 82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995 not found: ID does not exist" containerID="82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995"
Jan 21 17:15:03 crc kubenswrapper[4834]: I0121 17:15:03.923585 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995"} err="failed to get container status \"82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995\": rpc error: code = NotFound desc = could not find container \"82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995\": container with ID starting with 82da2e755583a4e32194e1cea49ab75b4d7b6fd1d84d1f6984e673591c117995 not found: ID does not exist"
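The three RemoveContainer/NotFound pairs above are a benign race: the kubelet asks CRI-O to delete containers whose records are already gone, the runtime answers NotFound, and the kubelet logs the error and carries on, since a missing container is exactly the desired end state. The pattern is ordinary idempotent deletion; a small sketch (NotFoundError and FakeRuntime are stand-ins for illustration, not the CRI API):

    class NotFoundError(Exception):
        """Stand-in for a gRPC NOT_FOUND status from the container runtime."""

    class FakeRuntime:
        def __init__(self, containers=()):
            self.containers = set(containers)

        def remove(self, cid: str) -> None:
            if cid not in self.containers:
                raise NotFoundError(cid)
            self.containers.discard(cid)

    def remove_container(runtime, cid: str) -> None:
        # Deleting something already gone is success, not failure: the desired
        # state ("container absent") holds either way, so swallow NOT_FOUND.
        try:
            runtime.remove(cid)
        except NotFoundError:
            print(f"container {cid[:13]} already gone; treating delete as done")

    remove_container(FakeRuntime(), "9ebdadeef2b5dbcbf5bece4edd7e16f721049fb4c312b3071a3a2d2c754a11ad")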
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.117343 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.117410 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" gracePeriod=600 Jan 21 17:15:17 crc kubenswrapper[4834]: E0121 17:15:17.236900 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.426347 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" exitCode=0 Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.426431 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f"} Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.426658 4834 scope.go:117] "RemoveContainer" containerID="9b1902e003bed1592c629b84b52f304e4bacb3a409ec3af4c9380972e3f1792d" Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.427122 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:15:17 crc kubenswrapper[4834]: E0121 17:15:17.427399 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:15:17 crc kubenswrapper[4834]: I0121 17:15:17.795913 4834 scope.go:117] "RemoveContainer" containerID="741bf92b32217ad9b7cec681775c01f03d7fa03c25a2b70a1a60ea91777a49e8" Jan 21 17:15:29 crc kubenswrapper[4834]: I0121 17:15:29.325357 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:15:29 crc kubenswrapper[4834]: E0121 17:15:29.326394 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:15:44 crc kubenswrapper[4834]: I0121 17:15:44.333827 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:15:44 crc kubenswrapper[4834]: E0121 17:15:44.334704 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:15:58 crc kubenswrapper[4834]: I0121 17:15:58.333562 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:15:58 crc kubenswrapper[4834]: E0121 17:15:58.334414 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:16:11 crc kubenswrapper[4834]: I0121 17:16:11.326056 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:16:11 crc kubenswrapper[4834]: E0121 17:16:11.326760 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:16:24 crc kubenswrapper[4834]: I0121 17:16:24.333589 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:16:24 crc kubenswrapper[4834]: E0121 17:16:24.334717 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:16:38 crc kubenswrapper[4834]: I0121 17:16:38.326668 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:16:38 crc kubenswrapper[4834]: E0121 17:16:38.329167 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" 
podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:16:50 crc kubenswrapper[4834]: I0121 17:16:50.325422 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:16:50 crc kubenswrapper[4834]: E0121 17:16:50.326294 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:17:01 crc kubenswrapper[4834]: I0121 17:17:01.325251 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:17:01 crc kubenswrapper[4834]: E0121 17:17:01.326324 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:17:16 crc kubenswrapper[4834]: I0121 17:17:16.325245 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:17:16 crc kubenswrapper[4834]: E0121 17:17:16.326126 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:17:30 crc kubenswrapper[4834]: I0121 17:17:30.325403 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:17:30 crc kubenswrapper[4834]: E0121 17:17:30.326570 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:17:43 crc kubenswrapper[4834]: I0121 17:17:43.326633 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:17:43 crc kubenswrapper[4834]: E0121 17:17:43.327420 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:17:58 crc kubenswrapper[4834]: I0121 17:17:58.325961 4834 scope.go:117] "RemoveContainer" 
containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:17:58 crc kubenswrapper[4834]: E0121 17:17:58.328251 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:18:11 crc kubenswrapper[4834]: I0121 17:18:11.326844 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:18:11 crc kubenswrapper[4834]: E0121 17:18:11.327738 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:18:26 crc kubenswrapper[4834]: I0121 17:18:26.325657 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:18:26 crc kubenswrapper[4834]: E0121 17:18:26.327843 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:18:38 crc kubenswrapper[4834]: I0121 17:18:38.326504 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:18:38 crc kubenswrapper[4834]: E0121 17:18:38.328170 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:18:49 crc kubenswrapper[4834]: I0121 17:18:49.325728 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:18:49 crc kubenswrapper[4834]: E0121 17:18:49.326910 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:19:04 crc kubenswrapper[4834]: I0121 17:19:04.339527 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:19:04 crc kubenswrapper[4834]: E0121 17:19:04.340443 4834 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:19:16 crc kubenswrapper[4834]: I0121 17:19:16.326271 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:19:16 crc kubenswrapper[4834]: E0121 17:19:16.327371 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:19:18 crc kubenswrapper[4834]: I0121 17:19:18.234362 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-cdhjz" podUID="9637e38c-b666-480c-a92a-71b40d1a41d0" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 17:19:27 crc kubenswrapper[4834]: I0121 17:19:27.326006 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:19:27 crc kubenswrapper[4834]: E0121 17:19:27.328048 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:19:38 crc kubenswrapper[4834]: I0121 17:19:38.325574 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:19:38 crc kubenswrapper[4834]: E0121 17:19:38.326496 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:19:51 crc kubenswrapper[4834]: I0121 17:19:51.325442 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:19:51 crc kubenswrapper[4834]: E0121 17:19:51.328365 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:20:05 crc kubenswrapper[4834]: I0121 17:20:05.324397 4834 scope.go:117] "RemoveContainer" 
containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:20:05 crc kubenswrapper[4834]: E0121 17:20:05.325206 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:20:16 crc kubenswrapper[4834]: I0121 17:20:16.324763 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:20:16 crc kubenswrapper[4834]: E0121 17:20:16.325689 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" Jan 21 17:20:28 crc kubenswrapper[4834]: I0121 17:20:28.325446 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:20:29 crc kubenswrapper[4834]: I0121 17:20:29.075578 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e"} Jan 21 17:22:47 crc kubenswrapper[4834]: I0121 17:22:47.113956 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:22:47 crc kubenswrapper[4834]: I0121 17:22:47.114912 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:17 crc kubenswrapper[4834]: I0121 17:23:17.113906 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:23:17 crc kubenswrapper[4834]: I0121 17:23:17.114542 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.255730 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:23:45 crc kubenswrapper[4834]: E0121 17:23:45.256547 4834 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="extract-utilities" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256559 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="extract-utilities" Jan 21 17:23:45 crc kubenswrapper[4834]: E0121 17:23:45.256591 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8c300d-4b72-430c-b266-4748e91253f8" containerName="collect-profiles" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256597 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c300d-4b72-430c-b266-4748e91253f8" containerName="collect-profiles" Jan 21 17:23:45 crc kubenswrapper[4834]: E0121 17:23:45.256605 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="extract-content" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256611 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="extract-content" Jan 21 17:23:45 crc kubenswrapper[4834]: E0121 17:23:45.256651 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="registry-server" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256657 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="registry-server" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256854 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8c300d-4b72-430c-b266-4748e91253f8" containerName="collect-profiles" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.256869 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcef3a5-fd9e-4fa7-9d9b-5a88033a2a2f" containerName="registry-server" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.258736 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.275296 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.410218 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.410445 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvnc\" (UniqueName: \"kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.410568 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.452347 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.454459 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.466764 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.513231 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.513309 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvnc\" (UniqueName: \"kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.513341 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.513830 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.513895 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.536970 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvnc\" (UniqueName: \"kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc\") pod \"community-operators-fj28v\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.580267 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.615140 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.615211 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6q8m\" (UniqueName: \"kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.615366 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.717226 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.717581 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6q8m\" (UniqueName: \"kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.717983 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.718687 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.722106 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities\") pod \"redhat-operators-jsp82\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.756390 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6q8m\" (UniqueName: \"kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m\") pod \"redhat-operators-jsp82\" (UID: 
\"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:45 crc kubenswrapper[4834]: I0121 17:23:45.776354 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:46 crc kubenswrapper[4834]: I0121 17:23:46.337451 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:23:46 crc kubenswrapper[4834]: I0121 17:23:46.384842 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.117060 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.117498 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.117552 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.118561 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.118637 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e" gracePeriod=600 Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.425818 4834 generic.go:334] "Generic (PLEG): container finished" podID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerID="b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84" exitCode=0 Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.425886 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerDied","Data":"b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84"} Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.426262 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerStarted","Data":"e22cc8228b3459ea37ffde8a5818d5b195a2bcbdda37d8afe15b637143787ff4"} Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.428198 4834 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.430017 4834 generic.go:334] 
"Generic (PLEG): container finished" podID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerID="ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1" exitCode=0 Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.430088 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerDied","Data":"ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1"} Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.430113 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerStarted","Data":"1df466c02fd4f94559832512a9e69f585e584a142e3e474d0c14a9fd204f5867"} Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.433257 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e" exitCode=0 Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.433285 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e"} Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.433313 4834 scope.go:117] "RemoveContainer" containerID="f06ee9311455fa122f54a4791de6278542789d3f500381def5af2d8f6b80f34f" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.857278 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.861794 4834 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.879877 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.891623 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.891708 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.891834 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssffj\" (UniqueName: \"kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.993746 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssffj\" (UniqueName: \"kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.994051 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.994216 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.994810 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:47 crc kubenswrapper[4834]: I0121 17:23:47.994772 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:48 crc kubenswrapper[4834]: I0121 17:23:48.022288 4834 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ssffj\" (UniqueName: \"kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj\") pod \"certified-operators-4jpqp\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:48 crc kubenswrapper[4834]: I0121 17:23:48.197219 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:48 crc kubenswrapper[4834]: I0121 17:23:48.473790 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerStarted","Data":"a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"} Jan 21 17:23:48 crc kubenswrapper[4834]: I0121 17:23:48.748041 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:23:49 crc kubenswrapper[4834]: W0121 17:23:49.208311 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44881270_6a88_41c6_ba63_f9fc0ca0e6e7.slice/crio-b3b3e7a1fb395a402e225efba19d42755a7439a1b5adf0d2b6c52df834227324 WatchSource:0}: Error finding container b3b3e7a1fb395a402e225efba19d42755a7439a1b5adf0d2b6c52df834227324: Status 404 returned error can't find the container with id b3b3e7a1fb395a402e225efba19d42755a7439a1b5adf0d2b6c52df834227324 Jan 21 17:23:49 crc kubenswrapper[4834]: I0121 17:23:49.490549 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerStarted","Data":"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70"} Jan 21 17:23:49 crc kubenswrapper[4834]: I0121 17:23:49.496645 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerStarted","Data":"b3b3e7a1fb395a402e225efba19d42755a7439a1b5adf0d2b6c52df834227324"} Jan 21 17:23:50 crc kubenswrapper[4834]: I0121 17:23:50.508620 4834 generic.go:334] "Generic (PLEG): container finished" podID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerID="9fdd2e89ebc4815816320f2313ff45077b7c809472a2e4258c1d4d8e8691b3ab" exitCode=0 Jan 21 17:23:50 crc kubenswrapper[4834]: I0121 17:23:50.508707 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerDied","Data":"9fdd2e89ebc4815816320f2313ff45077b7c809472a2e4258c1d4d8e8691b3ab"} Jan 21 17:23:50 crc kubenswrapper[4834]: I0121 17:23:50.514208 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerStarted","Data":"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c"} Jan 21 17:23:51 crc kubenswrapper[4834]: I0121 17:23:51.536049 4834 generic.go:334] "Generic (PLEG): container finished" podID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerID="857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c" exitCode=0 Jan 21 17:23:51 crc kubenswrapper[4834]: I0121 17:23:51.536649 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" 
event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerDied","Data":"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c"} Jan 21 17:23:52 crc kubenswrapper[4834]: I0121 17:23:52.553191 4834 generic.go:334] "Generic (PLEG): container finished" podID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerID="8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70" exitCode=0 Jan 21 17:23:52 crc kubenswrapper[4834]: I0121 17:23:52.553258 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerDied","Data":"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70"} Jan 21 17:23:52 crc kubenswrapper[4834]: I0121 17:23:52.557639 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerStarted","Data":"0e79b630efb7c8b78acfcb2118cca3c61130cf4ff8f207a20e7059584f411c37"} Jan 21 17:23:53 crc kubenswrapper[4834]: I0121 17:23:53.572573 4834 generic.go:334] "Generic (PLEG): container finished" podID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerID="0e79b630efb7c8b78acfcb2118cca3c61130cf4ff8f207a20e7059584f411c37" exitCode=0 Jan 21 17:23:53 crc kubenswrapper[4834]: I0121 17:23:53.572691 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerDied","Data":"0e79b630efb7c8b78acfcb2118cca3c61130cf4ff8f207a20e7059584f411c37"} Jan 21 17:23:53 crc kubenswrapper[4834]: I0121 17:23:53.588462 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerStarted","Data":"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240"} Jan 21 17:23:53 crc kubenswrapper[4834]: I0121 17:23:53.652030 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fj28v" podStartSLOduration=3.417533851 podStartE2EDuration="8.651998866s" podCreationTimestamp="2026-01-21 17:23:45 +0000 UTC" firstStartedPulling="2026-01-21 17:23:47.432384326 +0000 UTC m=+10373.406733391" lastFinishedPulling="2026-01-21 17:23:52.666849361 +0000 UTC m=+10378.641198406" observedRunningTime="2026-01-21 17:23:53.625213129 +0000 UTC m=+10379.599562184" watchObservedRunningTime="2026-01-21 17:23:53.651998866 +0000 UTC m=+10379.626347921" Jan 21 17:23:54 crc kubenswrapper[4834]: I0121 17:23:54.605740 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerStarted","Data":"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c"} Jan 21 17:23:54 crc kubenswrapper[4834]: I0121 17:23:54.608967 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerStarted","Data":"15a8bb446dc6ecbc5a7325e5ba0072bc5b5e97865ce7c03ae6a6c7a3728943ac"} Jan 21 17:23:54 crc kubenswrapper[4834]: I0121 17:23:54.631056 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jsp82" podStartSLOduration=3.660434048 podStartE2EDuration="9.63103635s" podCreationTimestamp="2026-01-21 17:23:45 +0000 UTC" 
firstStartedPulling="2026-01-21 17:23:47.427879255 +0000 UTC m=+10373.402228310" lastFinishedPulling="2026-01-21 17:23:53.398481567 +0000 UTC m=+10379.372830612" observedRunningTime="2026-01-21 17:23:54.624558648 +0000 UTC m=+10380.598907713" watchObservedRunningTime="2026-01-21 17:23:54.63103635 +0000 UTC m=+10380.605385395" Jan 21 17:23:54 crc kubenswrapper[4834]: I0121 17:23:54.646614 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jpqp" podStartSLOduration=4.167165941 podStartE2EDuration="7.646595938s" podCreationTimestamp="2026-01-21 17:23:47 +0000 UTC" firstStartedPulling="2026-01-21 17:23:50.511422719 +0000 UTC m=+10376.485771764" lastFinishedPulling="2026-01-21 17:23:53.990852716 +0000 UTC m=+10379.965201761" observedRunningTime="2026-01-21 17:23:54.641609391 +0000 UTC m=+10380.615958436" watchObservedRunningTime="2026-01-21 17:23:54.646595938 +0000 UTC m=+10380.620944983" Jan 21 17:23:55 crc kubenswrapper[4834]: I0121 17:23:55.581197 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:55 crc kubenswrapper[4834]: I0121 17:23:55.581281 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:55 crc kubenswrapper[4834]: I0121 17:23:55.666869 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:23:55 crc kubenswrapper[4834]: I0121 17:23:55.777321 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:55 crc kubenswrapper[4834]: I0121 17:23:55.777373 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:23:56 crc kubenswrapper[4834]: I0121 17:23:56.826643 4834 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jsp82" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="registry-server" probeResult="failure" output=< Jan 21 17:23:56 crc kubenswrapper[4834]: timeout: failed to connect service ":50051" within 1s Jan 21 17:23:56 crc kubenswrapper[4834]: > Jan 21 17:23:58 crc kubenswrapper[4834]: I0121 17:23:58.198348 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:58 crc kubenswrapper[4834]: I0121 17:23:58.199872 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:23:58 crc kubenswrapper[4834]: I0121 17:23:58.261094 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:24:00 crc kubenswrapper[4834]: I0121 17:24:00.059266 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:24:00 crc kubenswrapper[4834]: I0121 17:24:00.650775 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:24:01 crc kubenswrapper[4834]: I0121 17:24:01.702641 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jpqp" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="registry-server" 
containerID="cri-o://15a8bb446dc6ecbc5a7325e5ba0072bc5b5e97865ce7c03ae6a6c7a3728943ac" gracePeriod=2 Jan 21 17:24:02 crc kubenswrapper[4834]: I0121 17:24:02.723098 4834 generic.go:334] "Generic (PLEG): container finished" podID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerID="15a8bb446dc6ecbc5a7325e5ba0072bc5b5e97865ce7c03ae6a6c7a3728943ac" exitCode=0 Jan 21 17:24:02 crc kubenswrapper[4834]: I0121 17:24:02.723625 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerDied","Data":"15a8bb446dc6ecbc5a7325e5ba0072bc5b5e97865ce7c03ae6a6c7a3728943ac"} Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.036424 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.183260 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssffj\" (UniqueName: \"kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj\") pod \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.183378 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities\") pod \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.184220 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities" (OuterVolumeSpecName: "utilities") pod "44881270-6a88-41c6-ba63-f9fc0ca0e6e7" (UID: "44881270-6a88-41c6-ba63-f9fc0ca0e6e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.184362 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content\") pod \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\" (UID: \"44881270-6a88-41c6-ba63-f9fc0ca0e6e7\") " Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.185150 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.190361 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj" (OuterVolumeSpecName: "kube-api-access-ssffj") pod "44881270-6a88-41c6-ba63-f9fc0ca0e6e7" (UID: "44881270-6a88-41c6-ba63-f9fc0ca0e6e7"). InnerVolumeSpecName "kube-api-access-ssffj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.254561 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44881270-6a88-41c6-ba63-f9fc0ca0e6e7" (UID: "44881270-6a88-41c6-ba63-f9fc0ca0e6e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.287062 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssffj\" (UniqueName: \"kubernetes.io/projected/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-kube-api-access-ssffj\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.287108 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44881270-6a88-41c6-ba63-f9fc0ca0e6e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.737969 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jpqp" event={"ID":"44881270-6a88-41c6-ba63-f9fc0ca0e6e7","Type":"ContainerDied","Data":"b3b3e7a1fb395a402e225efba19d42755a7439a1b5adf0d2b6c52df834227324"} Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.738020 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jpqp" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.738037 4834 scope.go:117] "RemoveContainer" containerID="15a8bb446dc6ecbc5a7325e5ba0072bc5b5e97865ce7c03ae6a6c7a3728943ac" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.774746 4834 scope.go:117] "RemoveContainer" containerID="0e79b630efb7c8b78acfcb2118cca3c61130cf4ff8f207a20e7059584f411c37" Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.779453 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.789240 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jpqp"] Jan 21 17:24:03 crc kubenswrapper[4834]: I0121 17:24:03.806862 4834 scope.go:117] "RemoveContainer" containerID="9fdd2e89ebc4815816320f2313ff45077b7c809472a2e4258c1d4d8e8691b3ab" Jan 21 17:24:04 crc kubenswrapper[4834]: I0121 17:24:04.340526 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" path="/var/lib/kubelet/pods/44881270-6a88-41c6-ba63-f9fc0ca0e6e7/volumes" Jan 21 17:24:05 crc kubenswrapper[4834]: I0121 17:24:05.653435 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:24:05 crc kubenswrapper[4834]: I0121 17:24:05.821090 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:24:05 crc kubenswrapper[4834]: I0121 17:24:05.870486 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:24:06 crc kubenswrapper[4834]: I0121 17:24:06.449678 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:24:07 crc kubenswrapper[4834]: I0121 17:24:07.787035 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jsp82" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="registry-server" containerID="cri-o://757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c" gracePeriod=2 Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.312143 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.418689 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content\") pod \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.418761 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6q8m\" (UniqueName: \"kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m\") pod \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.418792 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities\") pod \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\" (UID: \"49f9c2b8-0a85-46a8-bd5a-d339757d7792\") " Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.421016 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities" (OuterVolumeSpecName: "utilities") pod "49f9c2b8-0a85-46a8-bd5a-d339757d7792" (UID: "49f9c2b8-0a85-46a8-bd5a-d339757d7792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.425006 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m" (OuterVolumeSpecName: "kube-api-access-w6q8m") pod "49f9c2b8-0a85-46a8-bd5a-d339757d7792" (UID: "49f9c2b8-0a85-46a8-bd5a-d339757d7792"). InnerVolumeSpecName "kube-api-access-w6q8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.522714 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6q8m\" (UniqueName: \"kubernetes.io/projected/49f9c2b8-0a85-46a8-bd5a-d339757d7792-kube-api-access-w6q8m\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.522762 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.545098 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f9c2b8-0a85-46a8-bd5a-d339757d7792" (UID: "49f9c2b8-0a85-46a8-bd5a-d339757d7792"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.626645 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f9c2b8-0a85-46a8-bd5a-d339757d7792-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.801298 4834 generic.go:334] "Generic (PLEG): container finished" podID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerID="757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c" exitCode=0 Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.801387 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerDied","Data":"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c"} Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.802287 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsp82" event={"ID":"49f9c2b8-0a85-46a8-bd5a-d339757d7792","Type":"ContainerDied","Data":"e22cc8228b3459ea37ffde8a5818d5b195a2bcbdda37d8afe15b637143787ff4"} Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.801439 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsp82" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.802360 4834 scope.go:117] "RemoveContainer" containerID="757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.844618 4834 scope.go:117] "RemoveContainer" containerID="8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.853740 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.871805 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jsp82"] Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.879259 4834 scope.go:117] "RemoveContainer" containerID="b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.890903 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.891226 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fj28v" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="registry-server" containerID="cri-o://ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240" gracePeriod=2 Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.919055 4834 scope.go:117] "RemoveContainer" containerID="757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c" Jan 21 17:24:08 crc kubenswrapper[4834]: E0121 17:24:08.919510 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c\": container with ID starting with 757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c not found: ID does not exist" containerID="757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.919565 4834 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c"} err="failed to get container status \"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c\": rpc error: code = NotFound desc = could not find container \"757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c\": container with ID starting with 757e18232e66f9b96d31a66c3a64a00713b98c1272ac1c7410f0f21068a9415c not found: ID does not exist" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.919604 4834 scope.go:117] "RemoveContainer" containerID="8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70" Jan 21 17:24:08 crc kubenswrapper[4834]: E0121 17:24:08.919958 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70\": container with ID starting with 8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70 not found: ID does not exist" containerID="8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.920018 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70"} err="failed to get container status \"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70\": rpc error: code = NotFound desc = could not find container \"8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70\": container with ID starting with 8ea8b247bdff789991837eeee8bff900d924ec42eb60e6649768a4d3b5944d70 not found: ID does not exist" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.920049 4834 scope.go:117] "RemoveContainer" containerID="b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84" Jan 21 17:24:08 crc kubenswrapper[4834]: E0121 17:24:08.920339 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84\": container with ID starting with b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84 not found: ID does not exist" containerID="b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84" Jan 21 17:24:08 crc kubenswrapper[4834]: I0121 17:24:08.920370 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84"} err="failed to get container status \"b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84\": rpc error: code = NotFound desc = could not find container \"b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84\": container with ID starting with b316fe0a55e7affe89b860bc2b515d16b63954d5a9e906177b440949cd56ed84 not found: ID does not exist" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.385993 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.449578 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msvnc\" (UniqueName: \"kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc\") pod \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.449640 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities\") pod \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.449789 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content\") pod \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\" (UID: \"480cf2ef-de8f-4c5e-8bcf-c685c2647543\") " Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.450514 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities" (OuterVolumeSpecName: "utilities") pod "480cf2ef-de8f-4c5e-8bcf-c685c2647543" (UID: "480cf2ef-de8f-4c5e-8bcf-c685c2647543"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.456485 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc" (OuterVolumeSpecName: "kube-api-access-msvnc") pod "480cf2ef-de8f-4c5e-8bcf-c685c2647543" (UID: "480cf2ef-de8f-4c5e-8bcf-c685c2647543"). InnerVolumeSpecName "kube-api-access-msvnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.505617 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "480cf2ef-de8f-4c5e-8bcf-c685c2647543" (UID: "480cf2ef-de8f-4c5e-8bcf-c685c2647543"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.552797 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.552833 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/480cf2ef-de8f-4c5e-8bcf-c685c2647543-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.552850 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msvnc\" (UniqueName: \"kubernetes.io/projected/480cf2ef-de8f-4c5e-8bcf-c685c2647543-kube-api-access-msvnc\") on node \"crc\" DevicePath \"\"" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.821818 4834 generic.go:334] "Generic (PLEG): container finished" podID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerID="ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240" exitCode=0 Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.821913 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerDied","Data":"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240"} Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.822027 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fj28v" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.822031 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fj28v" event={"ID":"480cf2ef-de8f-4c5e-8bcf-c685c2647543","Type":"ContainerDied","Data":"1df466c02fd4f94559832512a9e69f585e584a142e3e474d0c14a9fd204f5867"} Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.822096 4834 scope.go:117] "RemoveContainer" containerID="ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.849890 4834 scope.go:117] "RemoveContainer" containerID="857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.883691 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.884703 4834 scope.go:117] "RemoveContainer" containerID="ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.896144 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fj28v"] Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.907568 4834 scope.go:117] "RemoveContainer" containerID="ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240" Jan 21 17:24:09 crc kubenswrapper[4834]: E0121 17:24:09.908102 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240\": container with ID starting with ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240 not found: ID does not exist" containerID="ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.908155 
4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240"} err="failed to get container status \"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240\": rpc error: code = NotFound desc = could not find container \"ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240\": container with ID starting with ec3a6feb789cab823dd3c300c27bf5da812523fcc828ac97752634a477477240 not found: ID does not exist" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.908194 4834 scope.go:117] "RemoveContainer" containerID="857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c" Jan 21 17:24:09 crc kubenswrapper[4834]: E0121 17:24:09.908687 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c\": container with ID starting with 857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c not found: ID does not exist" containerID="857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.908736 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c"} err="failed to get container status \"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c\": rpc error: code = NotFound desc = could not find container \"857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c\": container with ID starting with 857c64133e5d3e00d9aa44831ad1c3d23f3572ca6b26c0ae9f676a64ad30518c not found: ID does not exist" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.908765 4834 scope.go:117] "RemoveContainer" containerID="ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1" Jan 21 17:24:09 crc kubenswrapper[4834]: E0121 17:24:09.909110 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1\": container with ID starting with ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1 not found: ID does not exist" containerID="ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1" Jan 21 17:24:09 crc kubenswrapper[4834]: I0121 17:24:09.909137 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1"} err="failed to get container status \"ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1\": rpc error: code = NotFound desc = could not find container \"ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1\": container with ID starting with ea1c58c59217e793fd77c22f623fc45859c487d5746a5819b4e90cb07e18d8e1 not found: ID does not exist" Jan 21 17:24:10 crc kubenswrapper[4834]: I0121 17:24:10.346579 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" path="/var/lib/kubelet/pods/480cf2ef-de8f-4c5e-8bcf-c685c2647543/volumes" Jan 21 17:24:10 crc kubenswrapper[4834]: I0121 17:24:10.348500 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" path="/var/lib/kubelet/pods/49f9c2b8-0a85-46a8-bd5a-d339757d7792/volumes" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 
17:25:22.157584 4834 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159127 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159143 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159154 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159162 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159189 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159199 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159211 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159219 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159246 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159254 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159274 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159281 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159297 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159304 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="extract-utilities" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159316 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159323 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="extract-content" Jan 21 17:25:22 crc kubenswrapper[4834]: E0121 17:25:22.159341 4834 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="registry-server" Jan 21 17:25:22 crc 
kubenswrapper[4834]: I0121 17:25:22.159350 4834 state_mem.go:107] "Deleted CPUSet assignment" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159591 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f9c2b8-0a85-46a8-bd5a-d339757d7792" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159614 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="44881270-6a88-41c6-ba63-f9fc0ca0e6e7" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.159649 4834 memory_manager.go:354] "RemoveStaleState removing state" podUID="480cf2ef-de8f-4c5e-8bcf-c685c2647543" containerName="registry-server" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.161862 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.184161 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.311858 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.312042 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.312183 4834 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lss7d\" (UniqueName: \"kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.413880 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.414098 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lss7d\" (UniqueName: \"kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.414126 4834 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " 
pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.415074 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.415576 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.438711 4834 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lss7d\" (UniqueName: \"kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d\") pod \"redhat-marketplace-pxcnc\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:22 crc kubenswrapper[4834]: I0121 17:25:22.501621 4834 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:23 crc kubenswrapper[4834]: I0121 17:25:23.006565 4834 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:23 crc kubenswrapper[4834]: W0121 17:25:23.702823 4834 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb654002_aba1_4237_afd3_31b8452f2943.slice/crio-fea958c7c259b95fc8d42280b5209a12b20d69f4e6091d7e7693276a12da17cf WatchSource:0}: Error finding container fea958c7c259b95fc8d42280b5209a12b20d69f4e6091d7e7693276a12da17cf: Status 404 returned error can't find the container with id fea958c7c259b95fc8d42280b5209a12b20d69f4e6091d7e7693276a12da17cf Jan 21 17:25:23 crc kubenswrapper[4834]: I0121 17:25:23.967143 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerStarted","Data":"fea958c7c259b95fc8d42280b5209a12b20d69f4e6091d7e7693276a12da17cf"} Jan 21 17:25:25 crc kubenswrapper[4834]: I0121 17:25:25.002746 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb654002-aba1-4237-afd3-31b8452f2943" containerID="ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52" exitCode=0 Jan 21 17:25:25 crc kubenswrapper[4834]: I0121 17:25:25.003246 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerDied","Data":"ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52"} Jan 21 17:25:26 crc kubenswrapper[4834]: I0121 17:25:26.020075 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerStarted","Data":"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c"} Jan 21 17:25:27 crc kubenswrapper[4834]: I0121 17:25:27.035154 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb654002-aba1-4237-afd3-31b8452f2943" 
containerID="b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c" exitCode=0 Jan 21 17:25:27 crc kubenswrapper[4834]: I0121 17:25:27.035203 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerDied","Data":"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c"} Jan 21 17:25:28 crc kubenswrapper[4834]: I0121 17:25:28.047559 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerStarted","Data":"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693"} Jan 21 17:25:28 crc kubenswrapper[4834]: I0121 17:25:28.074663 4834 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxcnc" podStartSLOduration=3.582417762 podStartE2EDuration="6.074643269s" podCreationTimestamp="2026-01-21 17:25:22 +0000 UTC" firstStartedPulling="2026-01-21 17:25:25.005440915 +0000 UTC m=+10470.979789960" lastFinishedPulling="2026-01-21 17:25:27.497666422 +0000 UTC m=+10473.472015467" observedRunningTime="2026-01-21 17:25:28.066190755 +0000 UTC m=+10474.040539820" watchObservedRunningTime="2026-01-21 17:25:28.074643269 +0000 UTC m=+10474.048992314" Jan 21 17:25:32 crc kubenswrapper[4834]: I0121 17:25:32.502388 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:32 crc kubenswrapper[4834]: I0121 17:25:32.503208 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:32 crc kubenswrapper[4834]: I0121 17:25:32.551572 4834 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:33 crc kubenswrapper[4834]: I0121 17:25:33.168400 4834 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:34 crc kubenswrapper[4834]: I0121 17:25:34.737200 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.118903 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxcnc" podUID="bb654002-aba1-4237-afd3-31b8452f2943" containerName="registry-server" containerID="cri-o://ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693" gracePeriod=2 Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.617779 4834 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.730717 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lss7d\" (UniqueName: \"kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d\") pod \"bb654002-aba1-4237-afd3-31b8452f2943\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.730986 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities\") pod \"bb654002-aba1-4237-afd3-31b8452f2943\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.731284 4834 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content\") pod \"bb654002-aba1-4237-afd3-31b8452f2943\" (UID: \"bb654002-aba1-4237-afd3-31b8452f2943\") " Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.732404 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities" (OuterVolumeSpecName: "utilities") pod "bb654002-aba1-4237-afd3-31b8452f2943" (UID: "bb654002-aba1-4237-afd3-31b8452f2943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.747880 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d" (OuterVolumeSpecName: "kube-api-access-lss7d") pod "bb654002-aba1-4237-afd3-31b8452f2943" (UID: "bb654002-aba1-4237-afd3-31b8452f2943"). InnerVolumeSpecName "kube-api-access-lss7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.759462 4834 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb654002-aba1-4237-afd3-31b8452f2943" (UID: "bb654002-aba1-4237-afd3-31b8452f2943"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.834337 4834 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.834389 4834 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb654002-aba1-4237-afd3-31b8452f2943-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:35 crc kubenswrapper[4834]: I0121 17:25:35.834412 4834 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lss7d\" (UniqueName: \"kubernetes.io/projected/bb654002-aba1-4237-afd3-31b8452f2943-kube-api-access-lss7d\") on node \"crc\" DevicePath \"\"" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.133374 4834 generic.go:334] "Generic (PLEG): container finished" podID="bb654002-aba1-4237-afd3-31b8452f2943" containerID="ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693" exitCode=0 Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.133438 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerDied","Data":"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693"} Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.133460 4834 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxcnc" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.133489 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxcnc" event={"ID":"bb654002-aba1-4237-afd3-31b8452f2943","Type":"ContainerDied","Data":"fea958c7c259b95fc8d42280b5209a12b20d69f4e6091d7e7693276a12da17cf"} Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.133521 4834 scope.go:117] "RemoveContainer" containerID="ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.187303 4834 scope.go:117] "RemoveContainer" containerID="b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.197243 4834 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.219051 4834 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxcnc"] Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.221227 4834 scope.go:117] "RemoveContainer" containerID="ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.288211 4834 scope.go:117] "RemoveContainer" containerID="ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693" Jan 21 17:25:36 crc kubenswrapper[4834]: E0121 17:25:36.289267 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693\": container with ID starting with ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693 not found: ID does not exist" containerID="ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.289328 4834 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693"} err="failed to get container status \"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693\": rpc error: code = NotFound desc = could not find container \"ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693\": container with ID starting with ff3023a1e19f3434a84e2f7538f2d0afdb31510df53a1630e6421d4ae40c7693 not found: ID does not exist" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.289368 4834 scope.go:117] "RemoveContainer" containerID="b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c" Jan 21 17:25:36 crc kubenswrapper[4834]: E0121 17:25:36.289755 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c\": container with ID starting with b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c not found: ID does not exist" containerID="b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.289807 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c"} err="failed to get container status \"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c\": rpc error: code = NotFound desc = could not find container \"b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c\": container with ID starting with b49aaf1fea7d6b71a40288476dcfc114dd06afff490ead16c5de19f32176f15c not found: ID does not exist" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.289839 4834 scope.go:117] "RemoveContainer" containerID="ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52" Jan 21 17:25:36 crc kubenswrapper[4834]: E0121 17:25:36.290207 4834 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52\": container with ID starting with ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52 not found: ID does not exist" containerID="ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.290245 4834 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52"} err="failed to get container status \"ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52\": rpc error: code = NotFound desc = could not find container \"ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52\": container with ID starting with ce24882800b982ebb21dd39236d9842a36b407e2f731be4411eb7e1a030ade52 not found: ID does not exist" Jan 21 17:25:36 crc kubenswrapper[4834]: I0121 17:25:36.346105 4834 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb654002-aba1-4237-afd3-31b8452f2943" path="/var/lib/kubelet/pods/bb654002-aba1-4237-afd3-31b8452f2943/volumes" Jan 21 17:25:47 crc kubenswrapper[4834]: I0121 17:25:47.114755 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:25:47 crc kubenswrapper[4834]: I0121 17:25:47.115434 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:17 crc kubenswrapper[4834]: I0121 17:26:17.114565 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:26:17 crc kubenswrapper[4834]: I0121 17:26:17.115110 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:47 crc kubenswrapper[4834]: I0121 17:26:47.114202 4834 patch_prober.go:28] interesting pod/machine-config-daemon-86g84 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 17:26:47 crc kubenswrapper[4834]: I0121 17:26:47.114752 4834 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 17:26:47 crc kubenswrapper[4834]: I0121 17:26:47.114806 4834 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86g84"
Jan 21 17:26:47 crc kubenswrapper[4834]: I0121 17:26:47.115841 4834 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"} pod="openshift-machine-config-operator/machine-config-daemon-86g84" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 17:26:47 crc kubenswrapper[4834]: I0121 17:26:47.115919 4834 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerName="machine-config-daemon" containerID="cri-o://a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec" gracePeriod=600
Jan 21 17:26:47 crc kubenswrapper[4834]: E0121 17:26:47.249495 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:26:48 crc kubenswrapper[4834]: I0121 17:26:48.251336 4834 generic.go:334] "Generic (PLEG): container finished" podID="4b9d51eb-93f7-4c89-8c91-258f908c766d" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec" exitCode=0
Jan 21 17:26:48 crc kubenswrapper[4834]: I0121 17:26:48.251458 4834 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86g84" event={"ID":"4b9d51eb-93f7-4c89-8c91-258f908c766d","Type":"ContainerDied","Data":"a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"}
Jan 21 17:26:48 crc kubenswrapper[4834]: I0121 17:26:48.251771 4834 scope.go:117] "RemoveContainer" containerID="b0d258c290319e0061c7c26365fee62104726b31db58aaa12ae65800a2961c7e"
Jan 21 17:26:48 crc kubenswrapper[4834]: I0121 17:26:48.252893 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:26:48 crc kubenswrapper[4834]: E0121 17:26:48.253474 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:27:02 crc kubenswrapper[4834]: I0121 17:27:02.326645 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:27:02 crc kubenswrapper[4834]: E0121 17:27:02.327316 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:27:14 crc kubenswrapper[4834]: I0121 17:27:14.325331 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:27:14 crc kubenswrapper[4834]: E0121 17:27:14.327352 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:27:25 crc kubenswrapper[4834]: I0121 17:27:25.324798 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:27:25 crc kubenswrapper[4834]: E0121 17:27:25.325465 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:27:40 crc kubenswrapper[4834]: I0121 17:27:40.325541 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:27:40 crc kubenswrapper[4834]: E0121 17:27:40.326550 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:27:54 crc kubenswrapper[4834]: I0121 17:27:54.330746 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:27:54 crc kubenswrapper[4834]: E0121 17:27:54.331495 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:28:07 crc kubenswrapper[4834]: I0121 17:28:07.325653 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:28:07 crc kubenswrapper[4834]: E0121 17:28:07.326495 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:28:20 crc kubenswrapper[4834]: I0121 17:28:20.324835 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:28:20 crc kubenswrapper[4834]: E0121 17:28:20.326405 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:28:34 crc kubenswrapper[4834]: I0121 17:28:34.340593 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:28:34 crc kubenswrapper[4834]: E0121 17:28:34.341337 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"
Jan 21 17:28:48 crc kubenswrapper[4834]: I0121 17:28:48.329140 4834 scope.go:117] "RemoveContainer" containerID="a406a22db22f46f4dc26f0d2dfa2dc8801e1778563197ae20923b74d935488ec"
Jan 21 17:28:48 crc kubenswrapper[4834]: E0121 17:28:48.330126 4834 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86g84_openshift-machine-config-operator(4b9d51eb-93f7-4c89-8c91-258f908c766d)\"" pod="openshift-machine-config-operator/machine-config-daemon-86g84" podUID="4b9d51eb-93f7-4c89-8c91-258f908c766d"